In defense of third-degree amendments


Senate majorities routinely restrict the ability of senators to participate in the legislative process.

Most commonly, the majority leader fills the amendment tree to block senators from offering amendments. The maneuver prevents the underlying legislation from being changed and protects rank-and-file senators in the majority from having to cast votes that could be used against them in future re-election campaigns.

Yet senators do not need the majority leader’s permission to offer amendments to legislation pending on the Senate floor. Indeed, they can offer so-called third-degree amendments even though the amendment tree has been filled.

This confronts the majority leader with a unique challenge. If utilized on a regular basis, third-degree amendments could eventually undermine his ability to control which measures receive votes on the Senate floor. This would, by extension, significantly weaken his ability to prevent the underlying legislation from being changed and to protect rank-and-file members in the majority from voting on amendments.

Given this, some senators have opposed efforts by their colleagues to offer third-degree amendments. Their concerns are illustrated in the debate surrounding the effort by Sen. Ted Cruz, R-Texas, to offer a third-degree amendment in July 2015. In opposing the maneuver, Sen. Lamar Alexander, R-Tenn., warned his colleagues of the consequences that would result if they joined Cruz in voting to overturn the decision of the chair. Specifically, he made two claims regarding Cruz’s effort, and the tactic of offering third-degree amendments more broadly.

First, Alexander equated Cruz’s appeal with the nuclear option employed by Senate Democrats in November 2013. He suggested, “If…a majority of senators agree with the senator from Texas, the Senate will be saying that a majority can routinely change Senate rules and procedures anytime it wants on any subject it wants in order to get the result it wants.” Alexander’s goal was to link Cruz’s appeal with the effort of Senate Democrats to circumvent the filibuster for judicial and executive nominations on a simple-majority vote in the previous Congress, a move that Senate Republicans had criticized widely ever since. Doing so would make it less likely that Republican senators would vote to overturn the chair, regardless of how they felt about the substance of the underlying amendments.

Second, Alexander asserted that Cruz’s appeal would, if successful, “destroy a crucial part of what we call the rule of regular order in the U.S. Senate.” The consequence would be the creation of “a precedent that destroys the orderly consideration of amendments.” As such, he confidently predicted, “There will be unlimited amendments. There will be chaos.”

Notwithstanding Alexander’s reputation as an expert on the Senate’s rules, a closer examination of his two claims demonstrates that neither has much merit.

First, there are important distinctions between third-degree amendments and the nuclear option, even though both utilize the same mechanism (i.e., an appeal). Appealing the ruling of the chair that an amendment is not in order when the amendment tree has been filled is not synonymous with the nuclear option because it does not violate the Standing Rules of the Senate. If successful, it would simply create a new precedent governing the amendment process. The appeal would be functionally equivalent to the nuclear option only if the new precedent explicitly violated an existing provision of the Standing Rules. Otherwise, the creation of a new precedent on appeal is entirely consistent with Senate rules and past practices.

Second, a closer consideration of regular order in the context of the amendment process suggests that it would remain relatively unaffected by a successful appeal in this scenario. Alexander contended that the amendment trees make it possible for the Senate to function today, and he predicted that floor debate on bills would descend into chaos if the current trees were altered by a successful appeal. The implication is that effectively removing the limits on the number of amendments that can be pending to legislation on the Senate floor would make it impossible to consider legislation in an orderly manner.

Yet the historical development of the Senate’s amendment process demonstrates that there is nothing inherently chaotic about expanding the number of amendments that can be pending simultaneously. The principles of precedence would still apply to any new branches created on the trees. As such, the framework for the orderly consideration of the pending amendments would be preserved.

Moreover, virtually the only time the amendment trees are adhered to literally in the contemporary Senate is when the majority leader wants to block other senators from offering amendments. Instead of processing amendments by following the trees, the Senate most often processes them by unanimous consent (e.g., “I ask unanimous consent to set aside the pending amendment and call up amendment No. 1234.”). Thus, limiting the majority leader’s ability to fill the amendment tree would simply return the Senate to the way it routinely processed amendments before the tactic began to be abused.

Indeed, the Senate has considered legislation for most of its history without routinely filling the amendment tree for the explicit purpose of blocking individual senators from offering their own amendments. While preventing the majority leader from filling the tree routinely may make it more difficult for the Senate to block votes on amendments altogether, the Standing Rules and the institution’s precedents contain several tools that can be used to facilitate the orderly consideration of amendments on the Senate floor. These include (but are not limited to) the requirement that committee amendments to reported legislation be considered before amendments from the floor, precedents prohibiting language previously amended from being amended again, and the filing deadlines associated with Rule XXII.

The arguments advanced by proponents and opponents of using third-degree amendments to circumvent the majority leader’s ability to fill the amendment tree suggest two very different directions for the future course of the Senate’s development.

On one hand, equating precedents that fill in the gaps where the rules are silent with the Standing Rules themselves would effectively bind the Senate to how it operated in the past, regardless of new circumstances, how the original precedent was established, or whether that precedent had merit or violated the Standing Rules in the first place. This would further increase the majority leader’s control over Senate decisionmaking by delegitimizing the efforts of individual members to adjudicate precedents or to protest rulings of the chair they perceive to be unfair or inaccurate.

On the other hand, third-degree amendments could eventually undermine the majority leader’s ability to control the amendment process. Challenging the ability to fill the amendment tree with a third-degree amendment thus has the potential to impose significant costs on the majority leader directly. If used on a routine basis, the tactic could weaken, or even end, the majority’s ability to control outcomes in the Senate. As such, third-degree amendments could substantially alter the balance of power between the majority and minority parties in the institution, as well as between individual senators and the party leadership.



Senate finally poised to restore FTC to full strength


Earlier today, President Donald Trump formally announced the three candidates he’s nominating for the open seats at the Federal Trade Commission. Joseph Simons, Rohit Chopra and Noah Phillips have diverse backgrounds and divergent political views, but they all have impeccable legal credentials and should be confirmed by the U.S. Senate without hesitation.

Not only will their confirmation put three more sets of steady hands at the wheel of the nation’s chief consumer protection and antitrust agency, but it also will finally restore the FTC to full strength, freeing it up to once again take on the kinds of hard cases that tend to split public opinion.

The FTC, which has jurisdiction over nearly every sector of the U.S. economy (with only a few limited exceptions), has had only two commissioners for most of 2017, ever since outgoing Chairwoman Edith Ramirez resigned in early February. To their credit, Acting Chairwoman Maureen Ohlhausen and Commissioner Terrell McSweeny have done an admirable job finding common ground and working together where possible, including by blocking an allegedly anticompetitive merger in daily fantasy sports, imposing structural-separation requirements on a key merger in the semiconductor industry, settling a privacy suit against a major ridesharing service and, most recently, launching an investigation into the Equifax breach.

However, with a partisan deadlock in place, the commission has been able to act only when its two commissioners agree. This has left it unable to tackle difficult questions that truly push the bounds of precedent and drive the evolution of legal doctrine forward. By all accounts, Simons, Chopra and Phillips are FTC scholars who should be ready to hit the ground running on day one. Each of them also has relevant personal experience that should stand him in good stead at the commission.

Joseph Simons, long rumored to be Trump’s pick for FTC chairman, comes most recently from the antitrust group at the law firm Paul Weiss. He also spent time as director of the FTC’s Bureau of Competition in the early 2000s, working extensively on mergers and other enforcement actions. Given the uptick in merger activity this year, Simons’ experience in this area will surely come in handy at the FTC, which has a key role to play, along with the U.S. Justice Department, in reviewing proposed mergers and acquisitions to prevent potential harms to competition or consumers.

Rohit Chopra, the pick to fill the open Democratic slot, also has significant prior experience in the federal government. He served as assistant director of the Consumer Financial Protection Bureau and in 2011 was named by then-Treasury Secretary Timothy Geithner to be the U.S. Treasury Department’s first student loan ombudsman. Chopra is considered a darling of key Democrats like Senate Minority Leader Chuck Schumer, D-N.Y., and Sen. Elizabeth Warren, D-Mass., for his efforts to combat student loan debt and other financial burdens affecting young people. While his stance on for-profit colleges may rankle some Senate Republicans, there is no reason to think he won’t be confirmed. After all, disagreements over policy aren’t a valid reason to deny confirmation of a qualified nominee (although members of both parties tend to forget that from time to time).

Finally, Noah Phillips was nominated to fill the final Republican vacancy at the FTC, and he also brings a decorated and interesting background to the table. Phillips previously spent time in civil litigation at both Steptoe & Johnson and Cravath, Swaine & Moore, but most recently has served as chief counsel to Senate Majority Whip John Cornyn, R-Texas, on the Senate Judiciary Committee. From that post, Phillips has worked on oversight of the U.S. legal system as well as intellectual property, which should come in handy as the FTC continues to engage in more patent work, such as its review of patent assertion entities and its ongoing case alleging anticompetitive abuse of patents underlying equipment used in smartphones.

With a full complement of qualified commissioners, the FTC can once again function as an agency with the skills and capacity to tackle key competition and consumer-protection issues. The Senate shouldn’t delay in confirming all three nominees.



Anxiety over NAFTA causing slide of the peso and an increase in imports from Mexico

President Donald Trump and U.S. Trade Representative Robert Lighthizer have made reducing the trade deficit a central focus of the in-progress renegotiation of the North American Free Trade Agreement.

Last weekend’s round of negotiations in Washington, D.C., ended on a fairly sour note. As optimism about a reinvigorated NAFTA 2.0 fades, economic anxiety in Mexico is putting downward pressure on the peso, according to the Wall Street Journal.

As the peso declines versus the dollar, imports from Mexico become cheaper. As a result, our bilateral trade deficit with Mexico will expand! In other words, even if we withdrew from NAFTA and tariff rates spiked, the bilateral trade deficit with Mexico could still increase.
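
To illustrate the mechanism with hypothetical numbers (the exchange rates and price below are assumptions for the example, not figures from the negotiations):

\[ \frac{180\ \text{pesos}}{18\ \text{pesos per dollar}} = \$10 \quad\longrightarrow\quad \frac{180\ \text{pesos}}{20\ \text{pesos per dollar}} = \$9 \]

If the peso slides from 18 to 20 per dollar, the same 180-peso good becomes a dollar cheaper for U.S. buyers, so dollar-denominated imports from Mexico rise even with trade policy unchanged.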

As virtually any economist worth his or her salt will tell you, trade deficits are driven by larger macroeconomic factors beyond trade policy. It is unlikely that trade deficits matter at all, but it is certain that bilateral trade deficits do not matter. The United States, for instance, has a trade surplus with Australia, which has a surplus with China, which has a surplus with the United States.

The sooner the Trump administration and the leadership at USTR recognize that attempting to address bilateral trade deficits through trade policy is a futile exercise, the sooner real progress can be made in the negotiations.

The Russia investigation: Why the overseers need oversight


What’s going on with the Russia investigation? For most of us, the answer likely is, “Beats me.”

It seems every week or two there’s a media report about Congress holding a hearing or some member of Team Trump or other person being called in to testify: James Comey, Paul Manafort, Donald Trump Jr. The facts come out in drips: one reads of meetings with Russians during this past year’s presidential election; Facebook turning over information about shady campaign ads; Michael Flynn and possibly his son being subpoenaed to produce documents.

There are five congressional committees involved, to say nothing of special counsel Robert Mueller. Who is doing what, when and why is anything but obvious.

Especially concerning is that Congress’ inquiries are increasingly viewed through partisan lenses. CNN reports:

In the House and Senate, several Republicans who sit on key committees are starting to grumble that the investigations have spanned the better part of the past nine months, contending that the Democratic push to extend the investigation well into next year could amount to a fishing expedition. The concerns are in line with ones raised by President Donald Trump, who has publicly and privately insisted he’s the subject of a ‘witch hunt’ on Capitol Hill and by special counsel Robert Mueller. Democrats, meanwhile, are raising their own concerns that the congressional Russia probes are rushing witnesses – including the testimony of President Donald Trump’s son-in-law Jared Kushner – as well as stalling appearances of other key Trump associates.

President Trump often has denounced the Russia issue as a hoax, and some of his supporters view it as a Democratic-media-deep-state “witch hunt” and fishing expedition. On the left, one still hears griping that Russian hackers helped Trump to steal the election and that Republican congressional majorities will hide any revelations of serious wrongdoing by the president or his campaign.

Desperately needed is something to bolster faith in the process. If the Russia investigation turns out to be a big nothingburger, then the country benefits if that conclusion is broadly accepted. And if there really is a there there, then it could lead to impeachment or other severe consequences, which, again, will require collective faith that the process is fair.

To raise credibility, Congress should adopt the benchmarks advocated by a right-left coalition of former government officials and policy wonks. In short, each of the committees (Senate Select Committee on Intelligence; Senate Judiciary Committee; House Permanent Select Committee on Intelligence; House Committee on Oversight and Government Reform; and House Judiciary Committee) should commit to carry out their work in ways that demonstrate bipartisanship and the desire to keep the public informed.

So, on the bipartisanship front, committee chairmen and ranking members should jointly hold press conferences and issue public communications under both their names. And when calling witnesses or demanding documents, both the majority and minority should consent.

To increase the public’s understanding, the committees should report publicly and regularly on basic aspects of the investigation: What’s the scope of the investigation? How many witnesses have been interviewed? How many hearings (open or closed door) have been held? How much has been spent?

That is not much to ask of Congress, but the benefits could prove immense. A big part of the glue that holds us together as a nation is acceptance of the legitimacy of government. With the presidency itself at the center of the investigation, the stakes are very high.



How senators can offer amendments without the majority leader’s permission


The demise of regular order in the Senate makes it harder for its members to participate in the legislative process. And their efforts to participate anyway give rise to a destructive cycle that perpetuates dysfunction and gridlock.

While regular order is not easily defined, it is generally associated with an orderly process in which senators are able to participate at predictable points. Conversely, its absence is typically associated with a secretive process in which members are barred from offering amendments to legislation pending on the Senate floor. When confronted with legislation in such a process, senators are left with no choice but to “blow up” the bill to force the majority to allow them to offer amendments. This all-or-nothing approach breeds frustration among members and their constituents, thereby making it even harder to negotiate after the majority’s original plan has been thwarted.

Given this dynamic, irregular order is hardly the most productive way to make decisions. Instead of helping senators communicate across their differences, it encourages the kind of extreme position-taking and inflexibility that complicates a more deliberative process.

It should thus be no surprise that the Senate at present has difficulty passing legislation of any consequence and that its amendment process is in shambles. This is because the majority leader routinely blocks amendments and files cloture on important bills as soon as they are placed on the Senate floor. The only leverage senators have in such a scenario is their ability to block cloture on the underlying legislation.

Fortunately, there is another way for senators to offer amendments to bills on the floor without the majority leader’s permission. They can offer third-degree amendments even when the tree has been filled and then appeal the subsequent ruling of the Senate’s presiding officer (i.e., the chair) that the amendment is not in order. Doing so can force a recorded vote in relation to the amendment. The majority can prevent a vote on the appeal by filibustering it. Yet the majority’s filibuster would also prevent its bill from passing.

Offering a third-degree amendment in this scenario is consistent with the Senate’s rules and precedents as reflected in the historical development of its amendment process. It also reinforces a common minority critique of how the majority party runs the Senate. Most importantly, the tactic makes it easier for senators to participate in the legislative process, thereby avoiding the destructive cycle created by forcing them to block cloture on a bill just to get the opportunity to offer an amendment to it.

The Senate’s Standing Rules do not regulate the number of amendments that members are allowed to have pending to legislation at the same time. Instead, that is governed by the four amendment trees followed in the Senate today. Those trees were created by precedent and evolved over time, only recently reaching their current shape.

Yet their evolution was not haphazard. The precedents that created the modern trees are based on general parliamentary law and serve to facilitate the orderly consideration of amendments on the Senate floor. For example, one precedent precludes so-called third-degree amendments. Specifically, the early Senate prohibited vertical third-degree amendments (i.e., an amendment to an amendment to an amendment to the underlying legislation) and horizontal third-degree amendments (i.e., a competing first- or second-degree amendment to the underlying legislation) because their use would make the floor debate on a bill too confusing.

In other words, the original prohibition on third-degree amendments was not intended to block senators from offering amendments altogether. Rather, the expectation was that while a third-degree amendment would be out of order, an identical first- or second-degree amendment would be allowed once that branch on the tree opened.

Even so, senators soon realized that the amendment process was too cumbersome when the prohibition was applied strictly. As a result, the Senate facilitated more member participation and deliberation by expanding the amendment trees over time to permit vertical and horizontal third-degree amendments where they had previously been prohibited. The primary motivation behind each expansion was the desire to make the amendment process more responsive to the needs of individual senators.

While the majority leader uses the same amendment trees today to block all amendments, senators retain the option to expand them again to make it easier to participate in the process and to increase deliberation. That is, they can offer their amendments even though the amendment tree has been filled.

The Senate’s precedents stipulate that “Any senator recognized is entitled to offer an amendment when such amendment is otherwise in order, but he cannot offer an amendment unless he has been recognized or has the floor.” The process of filling the tree follows precedent to block members from offering their own amendments. However, a senator may attempt to offer an amendment even though the tree has been filled. In such a situation, the chair would rule that the amendment is not in order pursuant to the Senate’s precedents. At that point, the member could appeal the ruling of the chair and request a recorded vote. The appeal represents an adjudication of the qualifying phrase in the precedent quoted above (“when such amendment is otherwise in order”); namely, whether an amendment is in order even though the amendment tree has been filled.

Offering amendments despite the filled tree and appealing the ruling of the chair that they are not in order forces the majority to cast votes on procedural questions directly related to the amendment being offered. Procedural votes have come to be viewed as substantive votes when the question is directly related to the underlying policy and the tactic is utilized on a regular basis. For example, the perception of cloture has evolved to the point that many now view it not simply as a procedural vote but as a substantive one. Votes on third-degree amendments could thus come to be characterized as substantive votes.

As such, the threat to offer a third-degree amendment may encourage the majority to return to regular order. This is because the tactic gives the minority more leverage with which to gain the right to offer amendments without having to block cloture.



Steven Greenhut on American Family Radio

American Family Radio host Chris Woodward interviews R Street Western Region Director Steven Greenhut on the latest goings-on in California’s state Capitol. Woodward and Greenhut discuss the possible impact of the Trump administration’s tax plan, which would remove a key deduction that benefits Californians. The plan puts California Republicans in a tight spot: they want to support the president, but the plan means a tax hike for California taxpayers. Woodward also asks Greenhut about a proposal to ban the sale of internal-combustion-engine vehicles by 2040 — something Greenhut explains is more about posturing than anything else, given the rapid technological advancements in the auto industry.

Blocking amendments is a perversion of Senate rules and practices


The Senate today is an institution in decline. It is paralyzed – unable to legislate, much less deliberate.

The Senate’s plight is reflected in the near-total deterioration of its amendment process.

For example, senators offered a paltry 147 floor amendments between January and September of this year. Compare that to the 568 amendments they offered during the same period in 2015 and the 668 in 2009. At the present rate, Senate amendment activity could increase by as much as 250 percent over the next 15 months and still fall short of the level observed in the first nine months of 2015 alone.
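
To make the arithmetic explicit (reading a 250 percent increase as 3.5 times the current figure):

\[ 147 \times 3.5 = 514.5 < 568 \]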

This is the culmination of a broader trend going back three decades. During that time, Senate majorities have increasingly empowered the institution’s majority leader to prevent senators from offering amendments to achieve their legislative priorities.

The majority leader blocks senators from offering alternative proposals by filling the amendment tree, i.e., offering the maximum allowable number of amendments to legislation before other senators have had a chance to debate the measure and offer their own amendments.

Once used sparingly in extraordinary circumstances, the tactic is now routine and well-documented. But less appreciated is the extent to which its normalization in recent years represents a radical break from the Senate’s past practice. Also, less understood is how precisely the tactic empowers the majority to pass its agenda, given that the minority can still filibuster the underlying legislation.

Recent research suggests that the amendment process gradually evolved to facilitate the orderly consideration of the Senate’s business. The direction in which it evolved was informed by the Senate’s effort to balance the need for order in its work with the imperative of legislative deliberation.

While the Senate’s first amendment trees only permitted two amendments to be pending at the same time, they were expanded in response to member demands by adding new branches. The result was to increase the number of amendments that could be pending before the Senate simultaneously.

Notwithstanding this increase, members maintained order by adhering to the principles of precedence first compiled for the Senate in Thomas Jefferson’s A Manual of Parliamentary Practice for the Use of the Senate and still followed today. In general, those principles held that senators should have an opportunity to amend legislative text proposed to be stricken and/or inserted before the actual vote to strike and/or insert said text.

Analyzing how the Senate’s current amendment trees came to be underscores the extent to which using them to block amendments is a perversion of the Senate’s rules and practices. That is, the precedents underpinning the trees are now being used for a purpose fundamentally at odds with the one for which they were first created. Instead of facilitating the orderly consideration of amendments on the Senate floor, they are now being used to block the consideration of amendments altogether.

This suggests that the act of offering amendments no longer serves as a way for the Senate to arrive at a greater understanding of what its members think about a given issue. Instead, the amendment process is commonly viewed as the last hurdle to be surmounted before a preferred bill can be sent to the House or to the president’s desk to be signed into law. To the extent that controversial amendments are permitted on legislation, their consideration is frequently structured in such a way as to guarantee their defeat. This requires channeling all decisions regarding which amendments can be offered to legislation through a single veto point (i.e., the party leaders or bill managers). Once established, such a veto point enables the leadership and/or bill managers to exercise disproportionate control over which amendments will be made pending to legislation on the Senate floor and to set the terms according to which those amendments will be disposed of.

Establishing a veto point is accomplished by putting the Senate in a parliamentary situation in which unanimous consent is needed to get an amendment pending under one of the four amendment trees. The primary tool the majority leader uses to accomplish this is filling the amendment tree (or offering a blocker amendment in one of the available slots such that, while it is pending, further amendments are precluded by the principles of precedence). No amendments are in order once all the extant branches on the tree are occupied. At that point, the majority leader and/or bill manager is free to focus on negotiations with interested rank-and-file colleagues to reach a unanimous consent agreement that provides for several amendments and a vote on final passage, without having to worry about a senator jeopardizing the legislation’s prospects by offering a controversial or otherwise unwanted amendment without permission.

[Chart: a Senate amendment tree, showing an amendment in the nature of a substitute (ANS) with branches C, D, E and F]

As noted, the majority leader (or bill manager) may also offer a “blocker” amendment to establish the veto point. For example, an amendment offered to branch C in the chart above would serve as a blocker amendment if offered first and in the form of a motion to insert (or strike and insert). Once pending, any other amendment offered directly to the amendment in the nature of a substitute (ANS) would require consent to get pending (which would presumably be denied if the majority leader/bill manager wanted to block the amendment).

This tactic is less aggressive than completely filling the amendment tree, in that it typically leaves a few branches open for possible amendment. However, these branches are rarely connected to the ANS directly. In the hypothetical example above, the blocker amendment leaves branches E and F (on the left side of the amendment tree) open, as well as branch D (the second-degree slot to C on the right side). These branches do not present the same challenges to the bill’s proponents because their impact would be minimal even if the amendments pending there prevailed. The majority leader could move to table C to prevent a vote on D on the right side of the tree, and adoption of E and F on the left side would be negated once the Senate adopted the ANS.

Once the Senate is in a parliamentary situation in which unanimous consent is needed to get an amendment pending to legislation on the floor, the majority leader can use his increased leverage to secure a higher vote threshold for an amendment’s adoption. The majority’s desire to prevent the minority from attaching what it considers poison-pill amendments to legislation it supports is thus reflected in the dramatic increase in the use of unanimous consent agreements that set 60-vote thresholds for adopting amendments. The majority leader uses the threat of not allowing amendments to get pending to compel individual senators to accept the higher vote threshold on their amendments, even though doing so means those amendments will most likely be rejected.

The routine practice of filling the amendment tree in the Senate today, coupled with the cloture process to end debate, effectively prevents members from being able to perfect legislation before it receives an up-or-down vote on final passage. Instead of a deliberative process designed to discern the true sense of the institution’s membership on an issue, senators are confronted with a fait accompli. This practice is inconsistent with the longstanding rules and practices on which the amendment process is based.

Structural imbalances in the Senate’s amendment process


The Senate is a pale imitation of what it once was.

A major reason for its current predicament is that senators are no longer freely able to amend the bills they consider. This is because the majority leader routinely blocks members from offering their own ideas on the Senate floor by filling the amendment tree.

While the tactic effectively precludes votes on unwanted amendments, the minority may still filibuster the underlying legislation in protest. This gives Senate minorities leverage to negotiate with the majority over what amendments will be permitted during a bill’s consideration, so long as 41 of its members are committed to blocking cloture until their demands are met.

But remaining united in opposition to cloture is not always easy, because the minority comprises individual senators who hold an array of policy views. Given this, the majority leader will negotiate directly with those members whose policy views are closest to his own when trying to secure the votes needed to invoke cloture.

The majority leader also can structure an amendment’s consideration in a way that makes its success less likely. This is done by setting a higher threshold for the amendment’s adoption in the unanimous consent agreement that typically schedules the vote on it. The utility of this approach to Senate majorities is reflected in the dramatic increase in its use in recent years to set 60-vote thresholds for passing amendments.

[Chart: amendments subject to 60-vote thresholds by unanimous consent, by Congress]

The earliest documented use of such a consent agreement occurred in the 102nd Congress. But the agreements remained a rare procedural tool until the 109th and 110th Congresses, when Majority Leaders Bill Frist, R-Tenn., and Harry Reid, D-Nev., respectively, began utilizing them on an increasing scale. In the 109th Congress, consent agreements were used in this manner in six instances. In the 110th Congress, their use increased significantly, totaling 37 instances. Use of the tactic remained relatively level in the 111th Congress, at 38 instances. In the 112th Congress, 60-vote thresholds were set on amendments on a staggering 111 occasions.

The tactic was utilized 35 times in the 113th Congress. The decline from the 112th to the 113th Congress is not as abrupt when viewed as a percentage of all amendments offered, because only 542 amendments were offered to legislation on the Senate floor during the 113th Congress, compared to 974 in the 112th. Measured as a share of amendments offered, the tactic’s use fell from roughly 11 percent to roughly 6 percent.

Moreover, the share of roll call votes (RCVs) on amendments set at 60 by consent has increased since the 109th Congress. The routine utilization of the 60-vote threshold is particularly striking when RCVs on amendments to the budget resolution and reconciliation bills are omitted. Excluding budget and reconciliation amendments from the count yields a more accurate portrayal of the tactic’s centrality to decisionmaking in the Senate at present, because a member cannot be blocked, in theory, from offering amendments during the budget process’s vote-a-rama.

[Chart: share of amendment roll call votes subject to 60-vote thresholds, by Congress]

Pursuant to these unanimous consent agreements, an amendment that does not receive the requisite number of votes is withdrawn. The practice thus allows an amendment’s supporters to demonstrate whether it commands the 60 votes cloture would require without going through the time-consuming process of invoking cloture.

Amendments offered pursuant to such agreements, however, are seldom successful. In the 109th and 110th Congresses, amendments considered in this manner failed 100 percent and 78 percent of the time, respectively. In the 111th and 112th Congresses, the percentages of amendments considered in this manner that failed were 61 percent and 87 percent, respectively. Most recently, 77 percent of the amendments considered pursuant to this tactic failed in the 113th Congress.

The use of unanimous consent agreements to set 60-vote thresholds on amendments can thus be interpreted as a way for the majority to facilitate the passage of legislation by letting the minority offer amendments without risking the adoption of a poison pill. This process does not present a problem for members of the majority party because they typically oppose the amendment in question, and a 60-vote threshold means that it is unlikely to pass. In addition, members of the majority are more likely to have their priorities included in the underlying bill before it reaches the Senate floor for consideration.

Minority party members, as well as those in the majority party who are out of step with their colleagues on the policy question at hand, often support this process begrudgingly because it provides an opportunity to offer an amendment and get a vote on it, all without having to expend the resources necessary to filibuster the underlying legislation. If they reject the 60-vote threshold, they may not get the opportunity to offer the amendment at all.

Setting 60-vote thresholds for amendments via unanimous consent agreements is central to the majority’s ability to control the agenda in the Senate today. Yet the tactic’s increased use in recent years is at odds with calls to reform, or eliminate, the legislative filibuster. This suggests that there is a growing constituency inside the Senate to increase the majority’s ability to control the legislative process while reducing the minority’s ability to leverage the filibuster to secure majority concessions. If this trend persists, the Senate risks becoming more majoritarian, and thus more dysfunctional, moving forward.

Ranking Member Cummings cites Lehrer on census

House Oversight Committee Ranking Member Elijah Cummings, D-Md., cited a recent op-ed by R Street President Eli Lehrer in his opening remarks at the panel’s Oct. 12 hearing on the 2020 U.S. Census.

EPA ends ‘sue and settle’ era


A new directive handed down Oct. 16 by Environmental Protection Agency Administrator Scott Pruitt pledges to put an end to the controversial practice of settling lawsuits with special interest groups behind closed doors, often while paying their attorneys’ fees.

These so-called “sue and settle” practices long have been criticized by businesses and conservative groups as a way to circumvent the normal regulatory process. Over its eight years, the Obama administration’s EPA chose not to defend itself in more than 100 lawsuits brought by special interest advocacy groups and paid out $13 million in attorneys’ fees in such cases.

Pruitt has had the tactic in his sights since his days as Oklahoma’s attorney general, when he sued the EPA in federal court more than a dozen times. In a letter this week to EPA managers, he said the practice “risks bypassing the transparency and due process safeguards enshrined in the Administrative Procedure Act and other statutes.” He also called it “regulation through litigation” and an “abusive” policy, in part because it excludes state involvement in any settlement between the EPA and private litigants.

The practice has not been confined to the Obama administration alone, as the Bush EPA settled 64 cases over its two terms in office. But during the Obama years, “sue and settle” became one of the primary avenues to formalize major regulations, including the Clean Power Plan’s proposed constraints on carbon emissions as well as the recent mercury and air toxics standards.

Pruitt’s directive calls for improved transparency around litigation, with all potential settlement agreements open to a 30-day public comment period. The directive also calls for publishing attorneys’ fees, a break from the Obama administration practice of agreeing to fees “informally.” Pruitt also has instructed the EPA to reach out directly to states and regulated entities that would be affected by any given consent decree.

Given the litigiousness of environmental policy, it’s easy to see how the “sue and settle” process could be attractive for the agency. But as Pruitt rightly suggests, the process had become a way to circumvent the full regulatory process, which can take years, and it essentially gave the executive branch the power to shape legal settlements in complaints that are never even heard by the courts.

Given the Obama administration’s clear tendency to replace legislative compromise with “pen and phone” executive action, there is little doubt the “sue and settle” tactic was being abused in ways that had not been foreseen when the practice began. Good riddance.



Local e-cigarette crackdowns are misguided and counter-productive


In an unfortunate trend across the country, cities and towns have raced to institute new regulations and update existing laws that deal with e-cigarettes and vapor products, often with little consideration of the potential these products have to improve public health.

In Massachusetts, recent actions by local boards of health to label e-cigarettes as “tobacco products” are misleading, at best, and at worst, a move that limits access to far less-harmful alternatives to cigarettes. Many local policies aimed at protecting teens from smoking myopically disregard the effects on the adult smoking population.

Tobacco harm reduction is an approach to public health that seeks to reduce the incidence of cigarette use and smoking-related diseases by encouraging smokers to switch to less-harmful alternatives. These include e-cigarettes, vapor products and certain smokeless tobacco products that, while not completely without risk, are orders of magnitude less harmful to a person’s health than their combustible cousins.

Historically, American tobacco control policy has been based on the premise that all tobacco products are hazardous and that none can offer personal or public health benefits. However, peer-reviewed research by the United Kingdom’s Royal College of Physicians has demonstrated that e-cigarettes are significantly safer than cigarettes, which continue to be both the most widely used and the most harmful tobacco products on the market.

That work by the Royal College of Physicians is particularly notable given that it was the college that, decades ago, presented the first comprehensive study of the negative health impact of cigarette use.

More recently, in the United States, Food and Drug Administration Commissioner Scott Gottlieb echoed these sentiments in a recent Washington Post interview. Gottlieb noted that while most e-cigarettes contain nicotine, a known addictive substance, the real threat to human health comes from the carcinogens produced when tobacco is combusted. Electronic nicotine delivery systems, or “ENDS,” provide a safer alternative for adults who still want access to nicotine but wish to avoid that mass of carcinogens.

While the relative safety of noncombustible products is not in doubt, many local boards of health continue to resist their use out of fear that they may lead to heightened incidence of tobacco use among teens. In particular, there are concerns that “flavored products” attract teens to smoking. In response, localities have issued broad prohibitions on the sale of such products without differentiating between cigarettes and less-harmful alternatives. Recently in Massachusetts, the towns of Canton and Marion and the city of Gloucester all have considered regulations that, if approved, would greatly reduce access to a host of less-harmful, noncombustible alternatives.

The unintended consequence of such rules is that they could make those who already smoke less likely to transition away from cigarettes. Furthermore, a recent study by Saul Shiffman and colleagues that examined flavor preferences among adolescent nonsmokers found they had less interest in supposedly youth-targeted e-cigarette flavors than adult smokers did. In fact, the study concluded that teens preferred flavors that seemed more “adult-like.” Thus, not only do flavor bans fail to achieve their desired effect of preventing teens from smoking, they actually make it more difficult for adult smokers to improve their health. That’s bad policy.

A holistic approach to harm reduction demands that, in addition to discouraging adolescents from nicotine and cigarette use, a significant goal of any tobacco regulation should be to encourage adult smokers to switch to safer alternatives. Greater flavor options provide smokers with more paths away from the most harmful and widely used tobacco products – cigarettes. Taking steps to make e-cigarettes less accessible to current and future smokers means failing to make progress on reducing future rates of smoking-related diseases, which collectively kill 480,000 people in the United States each year.

By focusing solely on minors, many of these local regulations disregard and discount cigarette use among adults. The measure of a successful public health policy should be the impact it has on the whole population, not just certain segments. While cigarette use in the United States is at an all-time low, the significant drop-off in smoking rates is due, at least in part, to the development of attractive (and much safer) alternatives.

Harris deserves praise for seeking middle ground on sex-trafficking bill


The Sacramento Bee implied in a recent article that Sen. Kamala Harris, D-Calif., was being inconsistent or unduly influenced by Silicon Valley campaign supporters in her reluctance to back a far-reaching sex-trafficking bill. But Harris’ approach of seeking a middle ground is the only sensible course, especially given the potential harm to internet speech that could result from a hastily drafted law.

It’s really tough to stand up to “mom and apple pie” legislation such as this bill. Indeed, that’s why the “Stop Enabling Sex Traffickers Act” is the most dangerous sort of legislation, in that it uses legitimate fears of the scourge of sex trafficking to grant the government newfound powers to shut down online speech.

It also grants attorneys the ability to sue website operators, search engines, email providers and other online players into oblivion. Is it any wonder the president of a trial-lawyer-backed “consumer advocacy group,” Consumer Watchdog, was quoted by the Bee favoring the bill? The act would certainly be good for the trial bar given that it would obliterate longstanding federal protections for those web-based “intermediaries” that host third-party online speech.

Thanks to Section 230 of the federal Communications Decency Act of 1996, Facebook, Google and even the Bee itself are limited in their liability for the posts, images and comments made on their sites. In the name of combating sex trafficking, this bill would eviscerate those protections by opening up intermediaries to federal criminal prosecution and civil liability.

“Without this protection, intermediaries would face a potential lawsuit in each one of the thousands, millions or even billions of posts, images and video uploaded to their services every day,” according to a letter that privacy groups, including the American Civil Liberties Union, sent to the U.S. Senate leadership in August. Intermediaries would “err on the side of caution” and face an unending sea of litigation – something that would dangerously constrict speech on the internet.

It’s unclear what exact middle ground Harris is seeking, but there’s certainly nothing wrong with her listening to Bay Area tech firms on an issue that intimately involves them – and us. Sure, Harris seems to have changed her position from her days as attorney general, when she filed pimping charges against a website’s operator. A judge later tossed those charges for many of the same reasons free-speech advocates oppose this bill.

We should all be happy that Sen. Harris is growing in office. By all means, let’s clamp down on the human filth who operate as sex traffickers – but without threatening the kind of online free speech we’ve all come to expect on the internet.



Why do liquor rules vary drastically from state to state?

The R Street Institute’s Jarrett Dieterle appeared on Fox 5 DC’s “The Final Five” with Jim Lokay to discuss booze policy in America. They discussed the difficulty in reforming onerous state alcohol laws and how R Street’s DrinksReform.org website is helping to track reform efforts across all 50 states.

Perry questions value of ‘free market’ in energy

Also appeared in: Red, Green and Blue


Shakespeare’s adage about those who “doth protest too much” seems an appropriate response to Energy Secretary Rick Perry’s recent testimony on an administration proposal to change the way coal and nuclear power plants are compensated for sending electricity to the U.S. grid.

Perry’s cryptic and somewhat baffling rhetoric Thursday in front of the House Energy and Commerce Committee’s Energy Subcommittee came during tough questioning by members worried the proposal, if accepted by federal regulators, would undermine electricity markets throughout the country. In particular, the proposed rule by the U.S. Energy Department calls for subsidies for power plants that keep at least 90 days’ worth of fuel stored on site. Such a rule would act as a subsidy for coal and nuclear interests at the expense of natural gas, solar, wind and other energy providers, and could cost consumers up to $4 billion a year, according to analysts.

“I think you take costs into account, but what’s the cost of freedom? What does it cost to build a system to keep America free? I’m not sure I want to put that straight out on the free market and build the cheapest delivery system here,” Perry said in response to a question from Rep. Paul Tonko, D-N.Y., about the potential for higher energy prices for consumers. “I think the cost-effective argument on this is secondary to whether or not the lights are going to come on.”

The DOE on Sept. 28 asked the Federal Energy Regulatory Commission (FERC) to consider new rules ensuring nuclear and coal-fired power plants are paid not just for the electricity they provide consumers, but the reliability they may provide to the electric grid. Former FERC commissioners have said such a rule could “blow up” wholesale electricity markets that have taken decades to design. Both coal and nuclear plant operators, meanwhile, have been shuttering inefficient plants over the past several years due to inexpensive natural gas-fired generation and government support for renewable generation.

It is true that fuel security is an important issue to evaluate, so long as it is evaluated objectively. Perry’s “Braveheart” moment on energy security suggests a certain irrationality that can only hurt electricity market operations and that, over time, would undermine fuel security as poor economic incentives become institutionalized.

The truth is that free and unfettered price discovery in electricity markets is the most important element of grid resiliency. Perry is engaged in a subterfuge, a deception that even someone of his legitimate political skills has trouble pulling off. The administration is in the position of being forced to come up with creative ways to fulfill promises President Donald Trump made directly to coal mine owners during the election campaign, even at the cost of free markets – a supposed core belief among Republicans and conservatives of all stripes.

This intellectual inconsistency is even more acute when one considers that Perry spent much of his 14 years as Texas governor praising and promoting the virtues of freer energy markets in the Lone Star State. Texas has the freest electricity marketplace in the country and hasn’t faced any major reliability problems, even in the aftermath of major flooding by Hurricane Harvey in late August. (Of course, it should be noted that most of Texas would be exempt from the DOE’s proposed rule because it maintains its own intrastate grid.)

Fortunately, efforts like this often run up against checks and balances that keep poor policies from being enacted. In response to the DOE proposal, a hitherto unprecedented coalition of 11 energy lobbying groups is asking FERC to delay processing the new rule so they can prepare arguments against it. The coalition includes major oil and gas associations, such as the American Petroleum Institute, alongside the most important renewable-energy lobbyists, such as the American Wind Energy Association.

Because FERC is an independent regulator, the administration can’t force the policy through by fiat. Final rules must be approved by a majority of FERC commissioners, and the commission only recently regained a quorum after more than six months without one. The likely postponement of quick action on the DOE proposal will give the five FERC commissioners (two of whom still await confirmation) time to consider the full ramifications of such a rule. If the $4 billion annual cost estimate is even close to accurate, the commission’s definition of what counts as “free” may be very different from Perry’s.



Remediation won’t cut it – we need cyber resilience


Since its cybersecurity kerfuffle in June, Equifax has become a four-letter word. And that word is “hack.”

Former CEO Richard Smith went to Washington this past week to testify in front of four different congressional committees about the perilous pairing of human and technological error that led to 2017’s largest data breach. Unrelenting members of Congress demanded regulation and remediation for consumers.

The hearing by the House Energy and Commerce Committee’s Digital Commerce and Consumer Protection Subcommittee focused attention on Equifax’s plan to remedy consumer confusion. The fact that Equifax is both a broker of identity information and a company that sells services to protect that information makes the aftermath of the hack particularly tricky to navigate.

More than 44 percent of Americans had a treasure trove of personal information stolen in the hack by criminal actors yet to be identified. The data include names, birthdates, Social Security numbers, addresses, driver’s license information and credit information. After cybersecurity firm Mandiant concluded its forensic investigation this week, Equifax added 2.5 million people to its earlier estimate, bringing the total number of consumers affected by the breach to 145.5 million.

The news has prompted members of Congress to renew calls for legislation requiring companies to do more about cybersecurity. However, such approaches target the symptoms rather than the disease.

Rep. Jan Schakowsky, D-Ill., is the sponsor of the recently reintroduced Secure and Protect Americans’ Data Act, which would require any organization or company that holds personal information to develop a written security policy, implement extensive security procedures and assess its security program annually. In the event of a data breach, organizations would be compelled to notify consumers. The requirements the bill sets out for “information brokers” are even more burdensome. It also empowers the Federal Trade Commission to treat noncompliance with these rules as an “unfair and deceptive act.”

While the bill is well-meaning, in practice, this regulation likely would result in more work, rather than more security, as organizations redirect resources to compliance.

Meanwhile, Rep. Ben Ray Luján, D-N.M., has proposed the Free Credit Freeze Act, which would require consumer reporting agencies to provide credit-freezing services free of charge in perpetuity. Equifax already has announced that it will be providing such a service, known as TrustedID Premier.

Both the Schakowsky and Luján bills are emblematic of a shortsighted overemphasis on response, remediation and resistance at the expense of a long-term, resilience-based approach to cybersecurity. Breach notification, security policies and credit-monitoring services may cure the headache, but they will fall short of preventing the next big hack. In contrast, pursuing resilience means that the cybersecurity ecosystem can withstand stressors, adjust to adverse events and bounce back quickly. Government should focus on fostering a policy environment in which these capabilities are strengthened.

Building immunity from the bottom up requires a layered approach that focuses on the incentives facing both attackers and defenders, much like the layers of defense in a secure internet-enabled system. Overlapping efforts from a variety of actors—industry, individuals, third parties and government—are the only way to provide a systemwide solution to what is a systemic problem.

Consumer awareness is one way to effect change in the cybersecurity ecosystem. The Promoting Good Cyber Hygiene Act—sponsored by Rep. Anna Eshoo, D-Calif.—identifies one area where government can play a positive role. It suggests the National Institute of Standards and Technology produce an accessible list of best practices, based on NIST’s cybersecurity framework, which is already in use by both companies and the government.

Creating guidelines for individuals takes this framework one step further and empowers consumers to improve their resilience to cyberattacks. Such guidelines would include information about what to do in the event of a data breach. They would allow consumers to better navigate Equifax’s bungled consumer-notification process and misleading landing page. Industry leaders such as Google, Facebook or Apple as well as third-party organizations like the Electronic Frontier Foundation or the Internet Society can also work to fill this information gap for consumers.

In a world in which a majority of Americans have personally been the victim of a major data breach, an approach that focuses on resilience can do more than merely treat the symptoms.


Image by Shawn Hill

 

If the rules are right, digital microlending could play a role in the subprime market

shutterstock_635517380

Well-functioning credit markets are essential tools for many people in times of personal economic instability or emergency. Unfortunately, some prospective borrowers with subpar credit ratings and credit histories do not qualify for the standard options of credit cards, secured loans or personal loans.

Credit unions frequently are the best available choice for those who have difficulty obtaining credit through traditional banks. But for some, digitally coordinated peer-to-peer lending agreements—inspired by microfinance arrangements for economically fragile communities internationally—also are proving to be an emerging option.

However, before these kinds of lending arrangements can be expected to expand domestically, digital rules will need to be established to give certainty to lenders and borrowers alike.

Subprime borrowers may have practiced poor financial habits or failed to meet their obligations, but this does not change their need for emergency credit when things get tight. Locked out of the prime credit market, these borrowers resort to payday loans, title loans and other products that come with very high interest rates and dubious collection methods. If they default on these loans, the interest and fees skyrocket, leaving them even worse off than before they took the loan. Most lenders must charge these high rates to compensate for the enormous risk they have undertaken to underwrite the loans.

Peer-to-peer digital microlending has the potential to fill a portion of the gap by providing this cohort with small, short-term loans that typically range from $100 to $500. While traditional peer-to-peer lending sites such as Lending Club target prime borrowers, other platforms are helping subprime borrowers.

One of the largest such peer-to-peer digital microlending platforms is the r/borrow section of reddit.com. This subreddit uses the reputational ecosystem within reddit to identify worthy borrowers, banning users who default or violate the terms of use. The subreddit facilitates the microloans and acts as a central database of transactions, coordinating more than $780,000 in loans in 2015.

If it can be properly scaled, peer-to-peer digital microlending could be a worthy alternative to payday loans for subprime borrowers. Unlike payday borrowers, digital borrowers are not necessarily assessed hefty fines or fees for late payments. Instead, they negotiate directly with lenders to find an amicable solution. True enough, some borrowers will default on their commitments and walk away without harm to their credit scores. To compensate, most lenders on microlending platforms (including r/borrow) charge high interest rates, ranging from 10 to 25 percent over several weeks or months. This isn’t a problem for most borrowers, who typically need small, short-term amounts to get them through until their next source of income.
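Because those rates are quoted over the life of a short loan rather than per year, comparing them with payday lending requires annualizing. Here is a minimal sketch in Python; the specific loan terms are hypothetical illustrations within the ranges described above, and the $15-per-$100, two-week payday example is a commonly cited industry benchmark rather than a figure from any particular lender.

    # Annualize a flat rate charged over a short term so microloans and
    # payday loans can be compared on the same (APR) footing.
    # All loan terms below are hypothetical illustrations.
    def simple_apr(rate_over_term: float, term_weeks: float) -> float:
        """Annualized rate for a flat charge over 'term_weeks' weeks."""
        return rate_over_term * (52 / term_weeks)

    micro = simple_apr(0.15, 6)   # e.g., 15% over six weeks        -> ~130% APR
    payday = simple_apr(0.15, 2)  # e.g., $15 per $100 over 2 weeks -> ~390% APR
    print(f"Microloan APR: {micro:.0%}; payday APR: {payday:.0%}")

Even annualized, the illustrative microloan costs roughly a third of the payday loan, though both sit far above prime rates.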

Barriers to the expansion of these platforms come in the form of the myriad usury laws on the books in most states. While banks and other financial institutions are exempt from such laws, individual lenders are not. Digital microlending transactions often happen across state lines, making it very difficult for lenders and potential borrowers to determine their proper jurisdiction and the interest rate restrictions that apply to them. This may be an opportunity for Congress to pre-empt such laws as a matter of interstate commerce. Legislation could provide a consistent standard for digital microlenders to follow, such as through the Uniform Electronic Transactions Act (UETA).

While admittedly there are other challenges to overcome, such as developing a scalable peer-to-peer enforcement mechanism, additional legal certainty would help expand this credit option for borrowers who find themselves locked out of traditional credit markets.


Image by designer491

The 1986 tax reform effort shows that Republicans have a tough road ahead

shutterstock_614260049

If there is one thing that unites Republicans as different as President Donald Trump, House Speaker Paul Ryan, R-Wis., and Sen. Susan Collins, R-Maine, it is a general sense that taxes should be lower than their current levels. For all the party’s changes on social issues and foreign policy over the years, tax cuts have consistently been the GOP’s guiding light.

But we are about to see that consensus tested. Trump and the Big Six—administration and congressional leaders on tax policy—have proposed a tax reform bill on a scale that has not been seriously considered since Ronald Reagan and a divided Congress pushed through the Tax Reform Act of 1986.

With the endorsement of the Big Six and the likely passage of a budget resolution with reconciliation instructions that will allow 50 senators and Vice President Mike Pence to advance the bill, the Republican tax plan is developing a sense of inevitability. After Republicans bungled the long-promised Affordable Care Act repeal, a losing effort on tax reform would seriously harm the party’s credibility with the faithful and may even trigger a revolt. If there was ever a time for Ryan and Senate Majority Leader Mitch McConnell, R-Ky., to get their caucuses together, it is now.

Despite the clear political incentives for Trump and congressional Republicans to deliver, the bill that the president introduced faces trouble ahead. While it’s likely that Republicans will pass some bill that affects taxation, a major tax reform bill is a longshot. Reforming the tax code usually means decreasing tax expenditures and closing loopholes in individual and corporate taxes, while lowering rates. Unfortunately for congressional Republicans, it is only the lower rates they agree on.

Once the thorny issue of loopholes comes up, members will find it difficult to come to a consensus. In fact, the 1986 Reagan bill is essentially the only time Congress has ever been able to enact a loophole-closing and rate-lowering tax-reform measure. As recounted in Showdown at Gucci Gulch (Vintage, 1988), the 1986 bill was nothing short of a miracle. What started out as an “ideal tax plan” from then-Treasury Secretary Don Regan was reshaped by political pressures from the Reagan administration, the Democratic-controlled House and the Republican-controlled Senate until it eventually limped across the finish line and became law.

The final product established a two-rate structure for individuals and a 34 percent rate for corporations, and it repealed individual deductions for state and local sales taxes and corporate tax breaks like the investment tax credit. However, as Gulch authors Jeffrey Birnbaum and Alan Murray note, the bill was a hodgepodge, and groups with clout, like the oil and gas industry, beat out those with less influence to keep the loopholes that mattered to them. Furthermore, the two-rate structure was a sham, as the bill included a surtax or “phantom rate” that was applied to top earners.

Still, it did end many loopholes and helped ensure that companies and the wealthy couldn’t avoid their tax bills altogether. Moreover, many members bucked lobbyists and parochial interests from their districts to support a bill that was in the general interest.

Political scientists like David Mayhew have found that the general interest is usually not what sways members of Congress to support bills. According to tax scholars with similarly pessimistic views on the incentives for legislative action, the 1986 bill was an anomaly and tax policy will usually be made incrementally rather than in sweeping changes.

In her essay on tax reform in The Evolving Congress (Congressional Research Service, 2014), Jane Gravelle lists the conditions necessary for a reform bill to pass. The first is strong presidential leadership, which ideally should come from a popular president. Reagan was extremely popular in 1985 and 1986, having just been re-elected in a 49-state landslide (it wasn’t until after the tax-reform effort that the Iran-Contra scandal reared its head and his numbers began to slide). Although Reagan was not very immersed in the details of the plan, he did provide mostly consistent public support and gave tax reform major billing in his 1985 State of the Union.

The second condition is that the first draft should be free from political pressures. This allows the draft to set the agenda and to use popular provisions like the state and local tax deductions as bargaining chips to garner support. Don Regan led the Treasury Department in drafting the “ideal tax plan” and his successor as Treasury secretary, James Baker, also put together draft legislation that was mostly free from political pressure.

The third condition is that the plan must be large and sweeping enough that it looks like “real reform.” This gives members an incentive to support it, as they do not want to be seen as beholden to special interests. The 1986 bill certainly did this, especially once Senate Finance Chairman Bob Packwood, R-Ore., introduced the radical two-rate structure that showed senators who had been more focused on preserving their slice of the pie that reform was serious business.

The 1986 bill also benefited from the fact that control of Congress was split between Democrats and Republicans, and the parties were not yet so polarized that they could not work together. A particularly strong alliance formed between relatively liberal tax reformers like Sen. Bill Bradley, D-N.J., and Republican adherents to supply-side economics, who believed that lower rates must be achieved at any cost, even eliminating popular tax breaks. The fact that both parties had an interest in seeing the bill passed encouraged its shepherds to face the wrath of special interests in unison, rather than try to score political points by blaming the other side. It also showed that, despite relatively weak public support for tax reform (which persists today), members do not want to oppose a bill that pings special interests in favor of the everyday taxpayer.

So, keeping in mind the lessons of 1986, what should we expect in 2017 or 2018? The good news for Republicans is that tax reform was on the agenda during the Obama presidency and, thus, has received some attention from political elites. During his second term, Obama wanted to work on reforming corporate loopholes while Ways and Means Chairman Dave Camp, R-Mich., was interested in dropping the top rate into the 25 to 28 percent range by eliminating a large swath of individual loopholes. However, the two proposals mostly stayed in their respective partisan enclaves and never gained traction.

The troubles of tax reform in the Obama years show one of the key weaknesses of the Trump plan: a lack of bipartisan consensus on what to do. Right now, Republicans are focused on giving corporations a tax cut. That is not in the interest of congressional Democrats, which means the GOP must use the reconciliation process to pass a bill on partisan lines. The last time this happened was early in the George W. Bush administration, an effort that focused more on tax-rate reductions than tax-code reforms.

So, if the partisan roadblock can be bypassed with reconciliation, what about the other 1986 conditions?

President Trump is historically unpopular at this point in his term, so that likely won’t boost the chances of reform passing. The effort might also be hurt by the optics of an extremely wealthy president trying to pass a bill that could give him or members of his family tax cuts. The worry that the current proposal is too favorable to the rich already has Republicans talking about keeping the top rate above 39 percent. Trump’s unpopularity could feed into the already lackluster support for tax reform. This is not necessarily detrimental (remember, tax reform was not popular in 1986 either), but having public opinion firmly behind a legislative initiative is never a bad thing.

The second condition for passing the bill is that it be drafted away from political pressures. The contents of the Trump bill are still somewhat unknown, as the Republicans have only released a framework, which is specific in some areas and lacking detail in others. So, we cannot make a judgment on the second condition quite yet. One of the most difficult political sells for Republicans will be the elimination or limitation of the deduction for state and local taxes. These are particularly important for Republicans from high-tax states like New York and California.

Their appeal goes beyond that, though. When working on the 1986 bill, a New York coalition to preserve the state and local incentives teamed up with oil and gas interests from Texas to change the bill when it was going through the House. The coalition gained the support of other members because, as it turned out, the state and local incentives had widespread support, even in low-tax states. The coalition received 208 pledges from members who said they would not vote for a bill that eliminated the deduction.

If widespread opposition like this emerges to provisions in the general framework or in the actual bill, it could spell doom for reform. One of the difficulties for tax reform that Gravelle mentioned was that the 1986 bill eliminated most of the low-hanging fruit for loopholes. The ones that remain are popular and will probably have fierce advocates organizing opposition against their repeal.

The third condition is that the bill is sufficiently wide-ranging and appears to be “real reform” instead of a thinly veiled effort to benefit some narrow constituency. This will force members to either vote for it or incur the wrath of the average taxpayer. As of now, it does not appear that the Trump bill has that quality. For one thing, Republicans have very different ideas about what they want out of the bill.

Sen. Bob Corker, R-Tenn., who is retiring after the 2018 elections, has said that he will not support any bill that increases the deficit, a tough sell when initial estimates show the Trump proposal losing trillions. Another recalcitrant Republican is Kentucky Sen. Rand Paul, who claims the Trump plan does not help middle-class voters enough. He showed on the Graham-Cassidy rendition of the ACA repeal that he is not afraid to buck his party, even when it comes to longstanding goals or core principles like tax reform. Arizona Sen. John McCain is demanding that the bill go through regular order and might even lean toward a bipartisan package rather than the 50-vote deal that Republican leaders appear to be eyeing. He opposed the Bush tax cuts in the early 2000s, so his vote is by no means guaranteed. We have not heard much from moderates like Collins and Lisa Murkowski, R-Alaska, both of whom come from poorer states and might not be keen on a bill that favors the rich.

Whatever these individual senators might be thinking, the bill has clearly not yet reached the point of inevitability that the 1986 bill did when Packwood released his two-rate structure.

If history is any guide, the Trump tax reform plan has rough sailing ahead. It seems more likely that Republicans will use the reconciliation process to enact tax cuts without targeting many of the deductions or corporate loopholes that could offset some of the revenue losses.

If anything, 1986 showed us what a herculean effort it is to overhaul the tax code. It’s not impossible, but Republicans will probably need to give more thought to selling the effort to skeptical members and the public before they are able to pass the most sweeping changes the tax code has seen in over 30 years. It might take two years (or two terms and a few more seats in the Senate) before President Trump is able to achieve anything like what Ronald Reagan and the 99th Congress did.


Image by EtiAmmos

Where R Street stands on birth control issues

shutterstock_174193232

The following post was co-authored by R Street Policy Analyst Caroline Kitchens.


After R Street Policy Analyst Caroline Kitchens wrote about birth control access in August for The Hill, more than a few allies asked us other questions about what we think and where we stand on some related issues. We’re writing this post to clarify what we as an institution think and deal with—and what we don’t deal with—on birth control and related topics.

To put it simply: We think current rules regarding access to many forms of birth control are an example of government overregulation. As such, R Street wants to change them both for their own sake and because it will advance our overall deregulatory agenda. We don’t, however, take institutional positions on related issues, such as health care and abortion.

With regard to birth control, the current regulatory regime is deeply unjust and imposes needless burdens on the vast majority of sexually active Americans. Even though the decision to use birth control (or not) is one of the most private parts of life, access to all hormonal birth control requires a time-consuming, intrusive and often expensive doctor’s office visit. This happens even though consumers are able to self-diagnose the need for the medication (in this case, wanting to avoid unintended pregnancy) and the drugs carry no risk of overdose or addiction. While hormonal birth control does carry some risks, there is no reason pharmacists should not be able to manage those risks on the basis of questionnaires or minor screenings.

OB-GYNs and pharmacists themselves support this. Nearly all American pharmacists already can write prescriptions for many types of vaccinations. There’s no reason they shouldn’t be able to write them for birth control pills as well, as they already can in eight states. In the short term, we’d like to expand pharmacists’ scope of practice to include other hormonal birth control—including the injection, patch and vaginal ring—and look for ways to allow other professionals who are not doctors to write prescriptions for the same.

To those who might suspect that we’re doing this to advance a broader libertarian and deregulatory agenda rather than simply working to expand access to birth control itself…your suspicions are justified. Our ongoing and expanding work on professional regulation convinces us that this is a good way to draw attention to a particularly egregious and harmful example of regulatory overreach, and to get more people talking about the ways people should be able to make a living without government approval. If this helps spark a conversation that eventually makes it easier for cosmetologists to practice their craft after having learned basic health precautions rather than having attended pointless and expensive classes, we’ll be delighted.

With all of that said, we don’t see why this agenda with regard to birth control and professional regulation should obligate us to take positions on related issues. Besides a few scattered comments on very narrow reinsurance topics, we’ve been silent on those pieces of health legislation that have come before Congress since we opened our doors a little over five years ago.

Insofar as there is to be a system that specifies a mandatory benefits package and requires zero co-pay preventative care, we have no objection to the inclusion of birth control in that package and think it is probably a good idea. Since we are not advocating that birth control be made truly “over the counter,” we do think it should be covered by insurance plans on, at minimum, the same basis as any other similar prescription, even if it doesn’t require a doctor’s office visit.

The broader questions of what the health-care system should look like and how (and whether) employers and individuals might shape benefits packages based on personal or religious preferences are outside of R Street’s expertise. In the long term, we might pursue health care as an issue area. But we’re not going to wade into a debate that’s this complicated and consequential without deep expertise on the topic. And we don’t have that right now.

While we might eventually work on health care, R Street will never have an institutional position on abortion, per se, or any other issue that defies a solution that’s primarily economic. In the case of abortion, this is partly a matter of comparative advantage: there are dedicated, sincere, hardworking, well-funded and committed groups on both sides of the debate over the termination of pregnancies. Starting a program devoted to the issue at R Street would not add anything.

Just as importantly, we’re a pragmatic think tank that looks for innovative, market-oriented solutions to problems. The important political debate over abortion, as it involves profound questions of individual autonomy and human life, may not be suited to a market-oriented solution. Trying to point out the advantages of “the market” would not and probably should not convince anybody to change his or her opinions, anyway.

In short, R Street favors faster, better, cheaper access to birth control and doesn’t think getting it should require a doctor’s office visit. We don’t see a need to wade into other related issues to do this and, for the time being, we won’t.


Image by Image Point Fr

Why is Richard Cordray voting on FSOC?

la-fi-cfwb-ocwen-20170420

The Financial Stability Oversight Council (FSOC) just made the sound decision to remove the designation of the insurance company American International Group as a “SIFI” or “systemically important financial institution.” The decision was right because the notion that regulators meeting as a committee should have the discretion to expand their own power and jurisdiction was a bad idea in the first place – one of the numerous bad ideas in the Dodd-Frank Act. The new administration is moving in a sensible direction here.

The FSOC’s vote was 6-3. All three opposed votes were from holdovers from the previous Obama administration. No surprise.

One of these opposed votes was from Richard Cordray, the director of the Consumer Financial Protection Bureau (CFPB). Wait a minute! What is Richard Cordray doing voting on a matter of assessing systemic financial risk? Neither he nor the agency he heads has any expertise or any responsibility or any authority at all on this issue. Why is he even there?

Of course, Dodd-Frank, seeking to make the CFPB important as well as free from budgetary control, made its director a member of FSOC. But with what defensible rationale? Suppose it is argued that the CFPB should be able to learn from the discussions at FSOC. If so, its director should be listening, and by no means voting.

Mr. Cordray, and any future director of the CFPB attending an FSOC meeting, should have the good grace to abstain from votes while there.

And when in the course of Washington events, the Congress gets around to reforming Dodd-Frank, it should remove the director of the CFPB from FSOC, assuming both continue to exist, and from the board of the Federal Deposit Insurance Corp. while it is at it, on the same logic.

New LegBranch.com Resource: How wealthy are our representatives?

shutterstock_420830281

The following blog post was co-authored by Charles Hunt, a doctoral student at the University of Maryland at College Park.


It likely wouldn’t surprise anyone, much less a congressional scholar, to learn that most members of Congress are wealthier than the average American. What might be more surprising is just how much wealthier they are.

According to estimates calculated by the Center for Responsive Politics, the average net worth of a member of the U.S. House is around $8 million. That’s about 116 times as much as the net worth of the average American, which according to the U.S. Census Bureau’s most recent estimate is $69,000.

Even the median net worth of the top 20 percent of Americans, about $630,000, doesn’t come close to the median net worth of a member of Congress (about $880,000).
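The multiples above are easy to verify from the figures cited. Here is a quick sketch in Python using only the numbers in this post (all of them estimates from the Center for Responsive Politics and the Census Bureau).

    # Verify the wealth multiples using the figures cited in this post.
    avg_member = 8_000_000    # average House member net worth (CRP estimate)
    avg_american = 69_000     # average American net worth (Census estimate)
    print(f"Average member vs. average American: ~{avg_member / avg_american:.0f}x")  # ~116x

    med_member = 880_000      # median member of Congress
    med_top20 = 630_000       # median of the top 20 percent of Americans
    print(f"Median member vs. top-quintile median: ~{med_member / med_top20:.2f}x")   # ~1.40x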

To help visualize this for individual members of Congress, we’ve put together an interactive graphic that displays data for all members of the 114th House of Representatives for whom data are available, including their state, district, party and estimated net worth. Each circle denotes a representative – the bigger the circle, the more he or she is worth. Hover over each circle to see the representative’s name and net worth, and zoom in and out to get a better view of a region. Use the wealth slider to limit the representatives visible on the map.

Below is a summary graphic showing wealth ranges and the number of members that fall into each range. Nearly half the members of the House are millionaires, and nearly two-thirds are worth more than $500,000.

worthbar

These net worth figures for members of Congress, given how out-of-sync they are with the wealth of their constituents, should give us pause and lead us to ask some important questions.

  • Why are members of Congress so much wealthier than average Americans?
  • Do voters care about this disparity, and should they?
  • What work experience created this kind of wealth for them, and what kinds of policy implications could this have?

Further analysis of our interactive graphic is likely to spur even more questions.

richest poorest


Image by DenisProduction.com

 

PRI Podcast: Steven Greenhut’s end-of-session wrap

R Street Western Region Director Steven Greenhut joins Pacific Research Institute’s Another Round podcast to discuss the California Legislature’s housing package, its recent cap-and-trade deal, bills that were overlooked in the recent legislative session and the impact of Proposition 54.

Rep. Tonko cites R Street’s energy research

Rep. Paul Tonko, D-N.Y., cites R Street research on electric reliability at an Oct. 3, 2017 hearing of the U.S. House Committee on Energy and Commerce’s Energy Subcommittee.

Prescribing on-site fuel storage is an unreasonable approach to grid resiliency

shutterstock_311444093

The U.S. Energy Department’s proposed rulemaking to the Federal Energy Regulatory Commission (FERC) is, at best, a myopic and inefficient approach to grid resiliency. The proposal prescribes one measure, among many options, to address a single, low-to-medium salience aspect of grid resiliency. That is, it proposes to compensate extended on-site fuel storage as the means to address the security of fuel supply across the power-plant fleet.

Fuel security is an important issue to evaluate objectively, but it’s been politicized immensely by rent-seeking interests. This clearly influenced the DOE proposal, which cherry-picked the evidence to make its case. The proposal selectively pulled information from the insightful DOE technical report issued in August. It exalts the benefits of fuel-secure nuclear and coal but ignores the report’s finding of substantial fuel-related outages at coal plants. Many coal plants couldn’t obtain fuel from their own on-site stockpiles because conveyor belts broke and coal piles froze.

Reducing fuel shortages at many power plants is not even a function of whether fuel is stored on-site. The biggest issue with natural gas plants lacking fuel – in the 2014 polar vortex or otherwise – is that they lacked the incentive to firm their fuel supply. Firming a fuel supply could come from on-site storage (e.g., backup oil) as well as off-site delivery, such as contracting for guaranteed pipeline service. Since the polar vortex, market reforms have increased the incentive to firm fuel supplies, and this has improved generator performance during severe weather events. In the PJM Interconnection, the largest grid operator, this largely came in the form of firming gas supplies using third-party marketers, which improves fuel security without increasing on-site supplies. Critically, this came from voluntary actions by the private sector, which creatively chose the lowest-cost ways to improve plant performance that fit their unique set of circumstances.

The DOE proposal cites the polar vortex as a cautionary tale of fuel insecurity. Yet the biggest issue was weather-related outages, as many plants couldn’t operate because temperatures dropped below a plant’s design basis (e.g., external instruments froze). If anything, it’d be more important to ensure weather-secure generation than fuel-secure generation.

Regardless, it shouldn’t be the role of government to compensate plant weatherization, on-site fuel storage or any other measure to possibly improve generator performance directly. Instead, regulators should ensure an incentive structure exists for the economically efficient level of weatherization, fuel assurance improvements and other performance-enhancing measures like improved maintenance. All these measures have costs, and only a well-functioning market should determine which costs are worth incurring to keep the lights on.

A market-based approach to reliability and resiliency values the performance or capability to provide a specific service. It does not explicitly value specific measures associated with performance or capabilities. Dozens of measures improve performance and capability, and the lowest-cost way of obtaining them is to provide proper incentives and let market participants decide their own course.

In contrast, the DOE proposal would result in compensation for one politically preferred measure. For government to favor a certain measure simply reveals the bias of central planning, which has a track record of raising costs unnecessarily. To the extent that on-site fuel improves generator performance, markets should reward the measure indirectly through a fuel- and technology-neutral paradigm that procures specific services.

Take “black-start” capability, for example. A power plant with black-start ability can start up without power assistance from the grid. This is critical for resilience, as it provides the ability to restore operations in case of a full grid blackout. Procurement of black-start capability predominantly occurs through administrative processes, rather than market mechanisms. Re-examining the determinants of black-start procurement and using a market approach may boost prospects for cost-effective resiliency.

A thoughtful, market-compatible approach to reliability and resiliency, like that recommended in the DOE technical report, is welcome. The current DOE proposal provides an example of what not to do. It is deeply flawed, rushed and anti-competitive. The fallout from the DOE’s proposal will hopefully encourage the administration to reinvent its strategy on resiliency to bolster market performance and empower consumers, rather than undercut them by prescribing actions. In the meantime, FERC must uphold market principles and push forward with an economically sound agenda. FERC only needs to cite the DOE’s technical report as an example of what to do, as it respectfully declines DOE’s political albatross.

 

James Wallner on the Senate filibuster

On the American Enterprise Institute’s Banter podcast, Peter Hanson and R Street Senior Fellow James Wallner discuss the Senate filibuster, how it operates, its impact on the Republicans’ agenda and ways to overcome it. The full audio is embedded below:

For connected cars, let the best technology win

shutterstock_682503085

Vehicle crashes are the leading cause of death among young people in the United States. It’s therefore crucial that we find ways to improve the safety of our roads if we want to save lives.

However, a proposal currently before the U.S. Transportation Department to mandate that all vehicles use a kind of vehicle-to-vehicle technology known as dedicated short-range radio communications, or DSRC, is the wrong approach to this issue. The mandate would hamper development of competing standards that may work better, in addition to creating potential security vulnerabilities.

Technology-specific mandates are always problematic. As a matter of process, bureaucratic decisionmaking is not well-suited to determine which technology is best for a specific need, or whether a need even exists at all. In the case of DSRC, there are technical reasons why other standards for vehicle-to-vehicle communication may prove more popular.

For instance, standards developed by organizations like 3GPP send signals over lower-band spectrum, which travel further and can penetrate obstacles like buildings or trees better than the high-band spectrum allocated for DSRC. These characteristics likely mean the lower-band spectrum options will be cheaper to deploy than DSRC, since the same area can be covered with fewer antennas. This standard already has broad support from tech companies and carmakers like Ford, Rolls-Royce, Audi and BMW. Mandating the use of DSRC, or any specific technology, would be unwise when the market already provides competitive alternatives.
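The coverage claim follows from basic radio physics. The sketch below uses the standard free-space path-loss formula to compare equal-loss range at DSRC’s 5.9 GHz against a hypothetical 700 MHz low-band alternative; the 700 MHz figure is our illustrative assumption, not a band named in any proposal, and real-world propagation (obstacles, antenna gains) is messier than free space.

    # Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44.
    # For equal loss, range scales inversely with frequency, so coverage
    # area per antenna scales with the square of the frequency ratio.
    # The 700 MHz low-band figure is an illustrative assumption.
    f_dsrc = 5900.0                # MHz, spectrum allocated for DSRC
    f_low = 700.0                  # MHz, hypothetical lower-band alternative
    range_ratio = f_dsrc / f_low   # equal-loss range advantage, ~8.4x
    area_ratio = range_ratio ** 2  # coverage area per antenna, ~71x
    print(f"Low-band range advantage: ~{range_ratio:.1f}x")
    print(f"Coverage area per antenna: ~{area_ratio:.0f}x")

Even discounting heavily for real-world conditions, the gap suggests why the same area could be covered with fewer antennas on lower-band spectrum.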

The DSRC mandate also raises security concerns. As security researcher Alex Kreilein notes, adding an interface with computers in other vehicles may improve safety on some counts, but it also creates new vulnerabilities. He argues the DSRC mandate would be especially risky in that it would create a monoculture in which all vehicles use the same technology. Compromising one car could, in fact, compromise all of them.

Kreilein further explains that it is dangerous to concentrate essential safety technology in one identifiable spectrum channel, where it can be more easily targeted by bad actors. We should allow the marketplace to consider and ultimately adopt competing standards using a variety of spectrum bands, rather than forcing all our eggs into the DSRC basket.

Some developers of self-driving vehicle systems are avoiding the security issues associated with vehicle-to-vehicle communications entirely by designing their products to account for their surroundings without directly communicating with other vehicles. These systems use technologies like cameras, LIDAR, radar and sonar to achieve similar situational awareness without the additional complications. In the case of these vehicles, any mandate would add unnecessary costs and security vulnerabilities, which would result in higher prices and less safety for consumers.

Spectrum for DSRC has been set aside since 1999, with almost nothing to show for it. Spectrum is a scarce resource and letting it remain underutilized has significant opportunity costs. The particular band allocated to DSRC (5.9 GHz) is adjacent to spectrum currently used for Wi-Fi. With demand for wireless bandwidth, including Wi-Fi, on the rise, the Federal Communications Commission could extend the available bandwidth for Wi-Fi to encompass the spectrum currently set aside for DSRC. While the FCC has been exploring ways to share this band between DSRC and Wi-Fi, we could maximize consumer benefits by abandoning the DSRC mandate and allowing the market to dictate how the spectrum should be used.

Thankfully, the DOT appears to be backing off the proposed mandate, moving it to the less urgent status of “undetermined.” The department should close the proceeding completely to create a level playing field that will allow the best technology to win and allocate spectrum to its most valuable uses.


Image by Zapp2Photo

National Flood Insurance Program, zoning, hurricanes: Lessons for lawmakers

In the wake of devastating storms in Texas, Florida, Puerto Rico and the U.S. Virgin Islands, the deeply indebted National Flood Insurance Program almost certainly will be forced to ask Congress to borrow even more money. Senior Fellow R.J. Lehmann took part Sept. 25 in a Capitol Hill discussion hosted by the Cato Institute to discuss ways the program could be reformed — and perhaps, eventually, completely privatized — ahead of its scheduled Dec. 8 expiration.

Full video of the panel is embedded below:

DOE proposal misframes grid resiliency  

shutterstock_79533418

U.S. Energy Secretary Rick Perry directed the Federal Energy Regulatory Commission (FERC) Friday to issue a rule to provide immediate cost recovery for power plants with extended on-site fuel supply. Read another way, the proposal is an arbitrary backdoor subsidy to coal and nuclear plants that risks undermining electrical competition throughout the United States.

The U.S. Energy Department proposal leverages a rarely used law that allows the department to propose its own rulemakings to FERC. The DOE proposal, which is notable for its lack of detail, nevertheless calls for FERC to create a new final rule within 60 days. While DOE has the legal authority to initiate proposed rulemakings, FERC retains ultimate discretion as to how to respond.

DOE’s proposal marks a deeply troubling departure from the thoughtful recommendations in its August technical grid report. That report sought to enhance the performance of electricity markets, whereas this overtly political proposal sets an impossible timeframe and concocts a recipe for wounding competitive markets, while potentially imposing billions of dollars in unnecessary costs on consumers.

Proponents of markets, consumer choice and limited government should shudder. Consumers would ultimately bear a hefty and unnecessary bill from any such draconian intervention, which would also raise capital borrowing costs and have a chilling effect on new investment. Proponents of good governance should also cringe, as the proposal calls for an unnecessarily rushed response in a timeframe completely unrealistic for enacting reforms through the proper channels. Crafting and implementing sophisticated market rules requires working through a robust development process, often over the course of two years or more. The 60-day timeframe called for in the proposal is unprecedented.

When it came to the DOE’s technical report, a solid effort by the department’s technical team muted external suspicion of pro-coal and nuclear bias. This DOE proposal instead validates that suspicion. It is neither technically nor procedurally sound and has political fingerprints all over it. Clearly, the thinking behind the proposal bypassed that of the department’s own technical experts. The political proposal does a disservice to prior DOE work, to consumers, to good governance and to competitive markets.

The DOE proposal is long on hyperbole and short on technical backing. It seeks “immediate action” to address the “crisis at hand” as the “loss of fuel-secure generation must be stopped.” Yet there is no crisis, as affirmed by recent electric performance metrics, the latest congressional testimony of the CEO of the North American Electric Reliability Corp. and even the DOE’s own technical report. Critically, motivations for market reforms should never aim to adjust compensation with a pre-determined result. The whole purpose of markets is to let competitive forces determine resource allocations, which lowers costs and allocates risk to the private sector, in contrast to government-determined investments.

Market failures for electric reliability and resilience justify a limited role for government intervention to facilitate competition. Experts traditionally considered grid reliability and resiliency as “common goods,” because suppliers cannot limit receipt of the product to those who pay for it. This induces free ridership and causes chronic underinvestment. Thus, the fundamental issue is ensuring incentive compatibility, where market rules align the economic interests of participants with the efficient and reliable performance of the electric system.

Getting the incentives right begins with ensuring prices accurately reflect supply-demand fundamentals and that there are markets for discrete reliability and resiliency services. The DOE technical report hit this on the head, calling for improvements in energy price formation and valuation of essential reliability services (e.g., voltage support and frequency response), which does not include on-site fuel storage. An exercise that defines discrete products for reliability and resiliency to procure through fuel- and technology-neutral markets is fruitful. The DOE proposal does not call for that.

The proposal is incompatible with sound market economics. It actually opens a gateway to expand cost-of-service regulation, where government substitutes for competition. Its definition of eligible units – those with a 90-day on-site fuel supply – is arbitrary and has no economic basis. Curiously, some coal plants wouldn’t even qualify: some hold roughly 30 days of on-site fuel supply, though many hold 70- to 100-day supplies.

With a splash of hyperbole, the proposal refers to the loss of “fuel secure” resources during the 2014 polar vortex as possibly “catastrophic,” citing the technical report inaccurately. This mischaracterizes the nature of temporary bulk power shortages. When bulk demand exceeds supply, grid operators take emergency actions, the most severe being voltage reductions (brownouts) and rotating blackouts. Brief voltage reductions and even rotating 30-minute blackouts are not catastrophic, by any stretch. This is why economic studies reveal consumers would often rather have their power curtailed briefly than pay a hefty premium to keep the lights on.

Prolonged (multiday) power outages can be catastrophic, especially during severe weather. The predominant cause of these sustained outages is damage to transmission and distribution infrastructure – take the recent hurricanes, as an example. They rarely result from power plant outages, let alone those from lack of fuel. DOE’s proposal seeks to take emergency action on, at best, a low-to-medium level resiliency issue.

A resiliency initiative should prioritize mitigating transmission and distribution damage and accelerating restoration. The DOE technical report recommended that grid-resiliency efforts prioritize disaster-preparedness exercises and that NERC and grid operators define resilience criteria and examine resilience impacts. That’s a thoughtful approach, and the exact opposite of the unrefined DOE proposal, with its single DOE-determined criterion for resilience.

A thoughtful resiliency approach would take a market-compatible mindset and recognize that advances in technology have helped enable a degree of product differentiation, where consumers can pay for different levels of reliability and resiliency services. This creates the ability to cease treating aspects of reliability and resiliency as a “common good,” where a central authority substitutes its judgment for that of consumers. This prospect to “privatize the commons” creates a great opportunity for the Trump administration to reduce the role of government planning, not to deepen government’s dictation of private services.


Image by Christopher Halloran

Dear Senate, we want more Pai

shutterstock_253261441

The following blog post was co-authored by R Street Tech Policy Analyst Joe Kane.


The U.S. Senate will have a chance Monday to reconfirm Ajit Varadaraj Pai for another term as chairman of the Federal Communications Commission, but it will first have to move past some baseless accusations about his suitability for the post that have been hurled the chairman’s way by a few congressional Democrats and political groups who want to block his reconfirmation.

In fact, Pai is arguably the most well-qualified chairman the FCC has had in recent years. Arguments to the contrary amount to a smokescreen for underlying disagreements with the market-oriented policy decisions Pai and his fellow commissioners have been pursuing at the FCC. These arguments should be rejected. We want more Pai.


Hailing from Parsons, Kansas, Pai attended Harvard University and the University of Chicago Law School before embarking on his illustrious legal career. Pai’s experience includes a federal judicial clerkship in Louisiana, multiple stints at the U.S. Justice Department and the Senate Judiciary Committee, and several years in private practice, first as associate general counsel for Verizon and then as a partner at the law firm Jenner & Block. Pai first joined the FCC in the General Counsel’s Office in 2007 before being nominated by President Barack Obama to be a commissioner in 2011. In 2012, he was confirmed by a Democratic-controlled Senate by unanimous voice vote.

During his time as commissioner, Pai consistently pursued market-oriented policies and opposed expansive, heavy-handed regulation. It therefore should be no surprise that he has worked to implement these same policies as chairman. Additionally, Pai has prioritized closing the “digital divide,” incorporating rigorous cost-benefit analysis into agency rulemakings and implementing unprecedented transparency reforms, like publishing all pending orders on the FCC’s website three weeks prior to a vote. Pai’s actions prove he is an able public servant truly dedicated to pro-consumer policies.


Nonetheless, political opponents and activist groups are staunchly opposed to Pai’s FCC agenda. These groups have launched an all-out assault against the reconfirmation vote, forcing Senate Republicans to invoke cloture to even get a vote on Pai, which is scheduled for next Monday. The same senators who thought Pai was well-qualified when nominated as a commissioner should take the same view now.

While Senate Democrats may disagree with the policies Pai and his fellow Republican commissioners are advancing at the FCC, blocking a qualified public servant from office is not the proper response. Telecom policy is hugely important to all Americans, so it shouldn’t be relegated to bureaucratic rulemakings and squabbles over nominations. Ongoing debates over closing the digital divide and protecting net neutrality are vitally important. We need our leaders in Congress to pursue bipartisan legislation to settle these debates, not hold the current FCC Chairman hostage.


Cheesy dance moves aside, he is the best man for the job.


Image by Mark Van Scyoc

Things are getting weird in pipeline country

shutterstock_590235038

In an environment that only a lawyer looking for billable hours could love, federal courts are making a mess of executive branch guidance concerning whether federal agencies need to consider “indirect” climate effects when regulating pipeline construction.

The Obama administration in August 2016 finalized guidance on how agencies should consider climate change in project reviews. The guidance said federal agencies must consider the larger impact of greenhouse-gas emissions from energy projects when completing their National Environmental Policy Act (NEPA) analyses.

The decision formalized executive action that President Barack Obama had informally created when he denied construction of the Keystone XL pipeline on climate-change grounds in November 2015. Obama then signed the United States up to substantial cuts in its greenhouse-gas emissions during the Paris Climate Accords in December 2015 and it all made sense.

But that was before Donald Trump came to town. In March, the White House rescinded the Obama guidance via an executive order, and in June, Trump announced the United States would leave the Paris Accord by the end of his first term. For outside observers, this would seem to shut down the possibility of the government taking climate change into consideration until at least another Democratic administration.

But this turns out not to be the case. For the last decade or so, some federal courts have rejected projects that the courts felt hadn’t taken the potential damage of indirect climate emissions into account. This gives plaintiffs the ability to argue to courts that there is legal precedent for blocking permits, even if the executive branch in charge of the permits changes hands and reverses the policy. The legal issues have never reached the U.S. Supreme Court for final adjudication.

The political battle over natural gas pipelines is where the sniper fire is hottest right now.

In August, the U.S. Court of Appeals for the D.C. Circuit ruled that the Federal Energy Regulatory Commission should have considered the impact of climate change when deciding whether to approve a 500-mile natural gas line serving the Southeast. It ordered FERC to redo the analysis.

But FERC, which is responsible for siting all interstate natural gas pipelines, has for years fought against including indirect emissions into its environmental analysis. Now, newly staffed with a majority of Republican commissioners appointed by Trump, FERC doesn’t look to be backing down.

On Sept. 15, FERC overruled New York State’s Department of Environmental Conservation (DEC), which had blocked an eight-mile extension of the Millennium Pipeline in upstate New York under its Clean Water Act authority. New York, which has banned hydraulic fracturing, argued in its rejection letter that FERC had earlier “failed to consider or quantify the indirect effects of downstream [greenhouse gas] emissions in its environmental review of the project.”

While pipeline builders were pleased with the FERC decision, the agency only overruled the state authority on a technicality, arguing that New York waited longer than the 12-month window allowed under statute before rejecting the application.

Two other pipeline companies have said they would seek similar waivers from FERC after being blocked by DEC using the same Clean Water Act authority. Yet it is unclear whether the same procedural violations have taken place, and courts have not supported FERC’s assertion that it shouldn’t take project emissions into account.

This means the Obama administration’s climate guidance is still operating through the U.S. court system, even though the Trump White House has rescinded it.

Again, things have gotten strange regarding pipeline siting in the United States – so much so that only a decision by the U.S. Supreme Court will likely straighten the rules out.


Image by Kodda

Rep. John Ratcliffe on the Separation of Powers Restoration Act

Earlier this year, Rep. John Ratcliffe, R-Texas, introduced the Separation of Powers Restoration Act. Unlike some bills, the act’s title precisely encapsulates its purpose: redressing the power disparity in our system of separated powers.

As close observers of our political system know well, the modern presidency has grown precipitously compared to Congress. While Congress itself deserves much of the blame for this state of affairs by over-delegating its powers to the executive branch, the third branch of our system has also been complicit. Under the judicial doctrine known as “Chevron deference,” the federal judiciary has systematically deferred to executive agencies when it comes to interpreting laws.

As R Street has noted previously, Chevron deference has become increasingly controversial in the legal community:

[Chevron deference means that] unless an agency’s interpretation of a statute is unreasonable, courts must adhere to it. Unsurprisingly, this allows agencies significant leeway to exercise their regulatory powers.

This level of deference to agency interpretations … has become contentious. There continues to be an ongoing debate among judges, legal scholars and practitioners about the propriety of according federal agencies such broad deference.

Rep. Ratcliffe’s bill addresses this issue by calling for an end to such deference; in its place, the bill would require courts to review agency actions de novo (“from the beginning”) and without deference.

LegBranch.com recently spoke with Rep. Ratcliffe about his bill, which he feels would provide an “immediate and profound” step forward in the effort to rein in the executive branch. As Ratcliffe put it, Chevron deference gives agencies the ability to “grade their own paper,” since their interpretation of statutes within their jurisdiction usually prevails in court.

For Ratcliffe, eliminating judicial deference to agency legal interpretations strikes at the very heart of our constitutional framework. “The wisdom of the founding fathers was that there would be a system of checks and balances,” Ratcliffe notes. “This is what Chevron deference has thrown out of balance; it should be the legislature that writes the laws, not agencies.”

Despite the relatively simple nature of his bill—its entire text barely exceeds 150 words—it remains controversial. Ratcliffe notes, however, that a version of the bill passed the House with at least some bipartisan support from several Democrats. According to Ratcliffe, President Donald Trump has also been receptive to the bill, which puts the ball squarely in the Senate’s court.

Given the Senate’s busy calendar, it’s anyone’s guess whether it will take up and pass the Separation of Powers Restoration Act. But those interested in checking the growth of the executive branch will certainly be keeping watch.

Sen. Graham has a good idea on climate change: Here’s how to do it

shutterstock_376878268

Sen. Lindsey Graham certainly likes to be in the middle of things. The South Carolina Republican took time away from Washington, D.C., where he had been trying to shepherd passage of a major health care bill, to tell an audience that “a price on carbon – that’s the way to go in my view.”

Graham has been here before. Back in 2010, he was in the thick of negotiations over a national carbon-trading system that broke down when the Senate couldn’t find enough votes. Graham actually called for climate-change legislation during the 2016 election, but had not mentioned a price on carbon explicitly until just last week.

Meanwhile, the Republican Party and its voters have continued to move further away from promoting any climate change solution, even as Graham remains consistent in his belief that CO2 emissions generated by man are warming the earth.

Graham is completely correct that a carbon tax is the best way to control greenhouse gas emissions with as little impact as possible on the national economy. Many economists believe a carbon tax would be a much more efficient and elegant way to encourage cuts in carbon emissions than alternatives like a trading system or command-and-control regulation. Placing a fee on carbon would be more transparent, can be done with fewer transaction costs and would keep Wall Street from gaming a complex, opaque marketplace.

But the details matter. If a carbon tax merely served as a new source of revenue to fund wasteful government spending, it would be of dubious value. Any proposal to institute a carbon tax must not expand the overall size and scope of government, and ideally, should actually shrink it.

To be successful, a carbon tax should be revenue-neutral — that is, the revenue generated by the tax should be paired with cuts to taxes that are even more economically damaging. For example, R Street has proposed eliminating the corporate income tax altogether in combination with a meaningful carbon tax. A number of studies have shown that such a trade-off would boost conventional economic growth, in addition to cutting pollution.

Moreover, any carbon-tax plan ought to pre-empt existing regulations of greenhouse gases. Because a carbon tax is layered on top of the retail cost of any fuel, it encompasses the complete externalized cost of a pollutant, meaning no additional regulatory costs should be imposed on companies or consumers.

This means that much, if not all, of the administrative state apparatus created to control hydrocarbon pollution would have to be eliminated as a prelude to carbon pricing. These policies include pre-empting any future regulations of greenhouse gases under the Clean Air Act (CAA). It’s also possible a slew of other regulatory authorities would be on the chopping block, as well.

Sen. Graham doesn’t appear likely to revive that Republican health care bill from the dead, but perhaps he could still tempt fate and resurrect a carbon tax.


Image by arindambanerjee

 

Can police predictions create crime?

Technology has the power to make a lot of things better – including police work and crime-fighting. But it also has the power to “create” crime where it didn’t exist before. In a recent interview with the Brian Gongol Show on WHO Newsradio 1040 in Des Moines, Iowa, R Street Justice Policy Director Arthur Rizer explains how predictive policing can help or hurt the very communities that need the work of “peace officers” the most. Full audio of the piece can be found at this link.

Congressional procedure and policymaking

At a recent gathering of the Legislative Branch Capacity Working Group, Molly Reynolds, a fellow at the Brookings Institution, led a discussion on congressional procedures and their impact on policy creation and outcomes. Topics discussed include how procedures, especially in complicated situations like reconciliation, empower leaders versus rank-and-file members, and what should be done to increase staffers’ knowledge of procedures and their consequences.

Full video of the panel is embedded below.

How does the United States rank in homeownership?

shutterstock_652092850

There are a lot of different housing-finance systems in the world, but the U.S. system is unique in being centered on government-sponsored enterprises. These GSEs—Fannie Mae and Freddie Mac—still dominate the system even though they went broke and were bailed out when the great housing bubble they helped inflate then deflated.

They have since 2008 been effectively, though not formally, just part of the government. Adding together Fannie, Freddie and Ginnie Mae, which is explicitly part of the government, the government guarantees $6.1 trillion of mortgage loans, or 59 percent of the national total of $10.3 trillion.
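For readers who want to check the arithmetic, a minimal sketch follows; the two dollar figures are simply those quoted above.

```python
# Quick check of the guarantee-share arithmetic quoted above.
gse_guarantees = 6.1e12      # Fannie + Freddie + Ginnie guarantees, in dollars
total_mortgages = 10.3e12    # national total of mortgage loans, in dollars

share = gse_guarantees / total_mortgages
print(f"Government-guaranteed share: {share:.0%}")  # ~59%
```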

On top of Fannie-Freddie-Ginnie, the U.S. government has big credit exposure to mortgages through the Federal Housing Administration, the Federal Home Loan Banks and the Department of Veterans Affairs. All this adds up to a massive commitment of financing, risk and subsidies to promote the goal of homeownership.

But how does the United States fare internationally, as measured by its rate of homeownership? Before you look at the next paragraph, interested reader, what would you guess our international ranking is?

The answer is that, among 27 advanced economies, the United States ranks No. 21. This may seem like a disappointing result, in exchange for so much government effort.

Here is the most recent comparative data, updated mostly to 2015 and 2016:

 

Advanced Economies: Homeownership Rates

Rank  Country         Ownership Rate  Data Year
1     Singapore       90.9%           2016
2     Poland          83.7%           2015
3     Chile           83.0%           2012
4     Norway          82.7%           2016
5     Spain           77.8%           2016
6     Iceland         77.8%           2015
7     Portugal        74.8%           2015
8     Luxembourg      73.2%           2015
9     Italy           72.9%           2015
10    Finland         71.6%           2016
11    Belgium         71.3%           2016
12    Netherlands     69.0%           2016
13    Ireland         67.6%           2016
14    Israel          67.3%           2014
15    Canada          67.0%           2015
16    Sweden          65.2%           2016
17    New Zealand     64.8%           2013
18    France          64.1%           2015
19    Mexico          63.6%           2015
20    United Kingdom  63.5%           2015
21    United States   63.4%           2016
22    Denmark         62.0%           2016
23    Japan           61.7%           2013
24    Austria         55.0%           2016
25    Germany         51.9%           2015
26    Hong Kong       48.9%           2017
27    Switzerland     43.4%           2015

Sources: Government statistics by country

It looks like U.S. housing finance needs some new ideas other than providing government guarantees.


Image by thodonal88

Hurricane Harvey isn’t about climate change, it’s about bad federal policy

In the wake of Hurricane Harvey, many have questioned the roles played by climate change and Houston’s loose zoning rules in the devastation that befell America’s fourth-largest city. R Street Senior Fellow R.J. Lehmann sat down with Nick Gillespie of the Reason podcast to discuss how explicit government policy encourages people to live in harm’s way and what can be done to reverse that trend. The full audio of that conversation is embedded below.

Section 230: When should online platforms be liable for the unlawful activity of their users?

When should online platforms be liable for unlawful activity? Section 230 of the Communications Decency Act (CDA 230) generally immunizes online platforms from liability when users engage in unlawful activity, but there are several exceptions to that immunity. Still, some websites have successfully hidden behind CDA 230 while sex traffickers and other criminal enterprises run rampant on their platforms. In response, several bills have been introduced in Congress that would narrow the scope of CDA 230’s immunity and expand potential liability for online platforms that harbor unlawful activity. A panel of legal and policy experts discuss the current scope of CDA 230 and what impacts the proposed amendments would likely have on law enforcement, victims of sex trafficking, and the internet ecosystem writ large.

Panelists:

Elizabeth Nolan Brown, Associate Editor, Reason Magazine

Arthur Rizer, Director of National Security and Justice Policy, R Street Institute

Berin Szóka, President, TechFreedom

Jeff Kosseff, Assistant Professor, United States Naval Academy Center for Cyber Security Studies

Stacie Rumenap, President, Stop Child Predators

Mary Graw Leary, Professor of Law, Catholic University of America

Taina Bien-Aimé, Executive Director, Coalition Against Trafficking in Women (CATW)

Arthur Rizer talks jail reform on KJZZ

R Street Justice Policy Director Arthur Rizer appeared recently on KJZZ, a National Public Radio affiliate in Phoenix, Arizona, to discuss how reforms to the nation’s jail system can be the key to safer communities. Audio of the story is embedded below.

How supporting internet freedom in Cambodia makes America great

shutterstock_506069557

I’ve had the privilege of working on internet freedom issues in a range of foreign countries, but none of my partnerships abroad has meant more to me than my work in Cambodia. Which is what you’d expect when you find out that, in the course of this work over the past three years, I met Sienghom, who just this summer became my wife.

I’ve written about my internet work in Cambodia here before. And I think Freedom House’s 2015 assessment that the internet “remains the country’s freest medium for sharing information” still holds true. That’s why I’ve generally been optimistic about Cambodia’s prospects for increasing internet freedom and democracy, as well as its increased engagement with the pan-Asian and world economies, which should lead to higher standards of living in the country generally.

It’s also why I was particularly troubled when Sienghom pointed out to me a range of disturbing news items emerging from Phnom Penh, starting just last month and continuing into this past week. The bad news started with the Cambodian government’s decision to shut down the U.S. Agency for International Development-funded National Democratic Institute in late August. NDI has focused on offering training and workshops for Cambodian politicians and would-be public servants, both in the majority Cambodian People’s Party (CPP) and in the opposition Cambodia National Rescue Party (CNRP), aimed at enabling stakeholders to function effectively and democratically in a government framework that has been edging (thanks in part to internet engagement) toward a more truly representative parliamentary democracy. In response, USAID expressed its disappointment, as did the U.S. State Department, while Cambodian Prime Minister Hun Sen—who at other times has sought to thaw U.S.-Cambodia relations—has ramped up criticism of the United States and USAID in particular.

In August, The Cambodia Daily, an English-language independent newspaper, quoted University of New South Wales professor of politics Carl Thayer about these latest trends, saying “[a]t this point, it looks like the U.S. is losing leadership by default and China’s gaining it by design.” But this past week, The Cambodia Daily itself was shut down, ostensibly for tax reasons. This represents a new wave of government actions designed to quell not just dissent, but any criticism whatsoever. In the same few days, the government has arrested CNRP leader Kem Sokha, who is now charged with treason.

As Thayer remarked to The New York Times, “The current crackdown is far more extensive than ‘normal’ repression under the Hun Sen regime.”

But what’s been triggering this latest wave of repression in a country that, as a U.S. ally, has been inching, not always steadily, toward democracy in recent years? Longtime observers will point you first to the last round of elections in 2013; as I wrote here in 2015:

It hasn’t helped the current government’s sense of insecurity that the 2013 Assembly election was marked by civil protest, which the government is inclined to blame, along with its slipping majority, on the rise of social media like Facebook, where individual Cambodians have felt free to share their political views.

But there’s another, more recent factor at work—namely, the messages the Trump administration has been sending to Cambodia’s leadership. One obvious message, per a report in the Phnom Penh Post, is the administration signaling its intent to cut foreign aid to Cambodia to zero. Another is President Trump’s often antagonistic relationship with the American press, which Hun Sen interprets as legitimizing his own treatment of the Cambodian press.

President Trump’s relationship with American journalists may not improve anytime soon, but the president could reconsider whether to cut aid entirely. Understandably, Americans who feel they didn’t adequately benefit from the post-2008 economic recovery may favor the administration’s expressed commitment to disengage from (or at least reduce) the United States’ longstanding commitments to both our allies and to an international order aimed at increasing peace and promoting progress. The current “America First” foreign policy—combining promises of military strength with renegotiated trade deals—certainly resonated with these voters.

But there’s also a risk that disengagement from the role we’ve played in the international framework projects weakness rather than strength. That’s a message that can undercut the administration’s goal of a world that is “more peaceful and more prosperous with a stronger and more respected America.”

We may debate whether President Trump’s “fire and fury” threat last month tempered or inflamed North Korea’s in-your-face attitude about its nuclear weapons program. What’s less debatable is that the perception in many foreign countries is that the United States intends, if not to exit the world stage, then to reduce its role to a walk-on part. Whatever else that does, it doesn’t give the impression of a stronger, greater America.


Image by atdr

 

Prominent carbon tax skeptic admits it could increase economic growth

carbon tax

A lot of writing opposed to carbon taxes is, frankly, not of high quality. But there are exceptions. Bob Murphy, an economist with the Institute for Energy Research, has written some of the strongest and most sophisticated arguments for carbon-tax skepticism. So it was with interest that I read his latest broadside on the subject in the Canadian Free Press.

In the piece, Murphy focuses his ire on what might be called the nonenvironmental case for carbon taxes. Even if climate change were a hoax invented by the Chinese, a carbon tax still might be a net benefit to the economy if it allowed for cuts to more economically damaging taxes. As Murphy summarizes the case:

[W]hen proponents of a carbon tax pitch it to American conservatives and libertarians, they explain that if we have a revenue-neutral carbon tax where 100% of the proceeds are devoted to cutting taxes on capital, then reputable models show that this could boost even conventional economic growth, in addition to whatever environmental benefits accrue from reduced greenhouse gas emissions. This is called a ‘double dividend’ that arises when policymakers began to ‘tax bads, not goods.’

This sounds reasonable. And R Street has, of course, argued for swapping the corporate income tax for a carbon tax on precisely these grounds. But would it really work?

To show the limitations of the “double dividend” argument, Murphy highlights a chart from a 2013 analysis by Resources for the Future, showing the economic impact of instituting a carbon tax and using the revenue to reduce various other forms of taxation.

carbon tax

As the chart shows, it matters a lot what type of tax you are swapping out for a carbon tax. Using carbon tax revenues to offset reductions in consumption taxes, for example, would be a net negative for the economy. Swapping a carbon tax for cuts to taxes on labor would have a smaller, but still negative effect. And simply returning the money to people in the form of lump sum payments would be worst of all.

But look at the blue line. If carbon tax revenues were used to cut taxes on capital, this would result in a net increase in gross domestic product. Murphy himself acknowledges this, stating that “the RFF model shows that only if carbon tax revenues were devoted entirely to a corporate income tax cut would the economy’s growth rise above the baseline.”

That’s overstating things a bit. For example, just eyeballing the chart, it looks like a plan that used half of the revenue from a carbon tax to cut taxes on capital and the other half to cut taxes on labor would still be a net positive for economic growth, albeit not as much of a positive as if all the money went to cutting capital taxes. I’m not saying that R Street would favor such a split, just noting that you could still end up ahead economically even if not all the money from the carbon tax went to cutting taxes on capital.
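For the quantitatively inclined, here is a minimal sketch of that weighted-average reasoning. The GDP-impact coefficients are invented placeholders, not numbers from the RFF model; only their signs and rough ordering mirror the chart.

```python
# Illustrative sketch of blending revenue-offset options. The GDP-impact
# coefficients below are made-up placeholders; only their signs and
# ordering follow the RFF chart discussed above.
GDP_EFFECT = {
    "capital":  0.5,   # cuts to capital taxes: net positive
    "labor":   -0.1,   # cuts to labor taxes: slightly negative
    "lump_sum": -0.3,  # lump-sum rebates: worst of the options
}

def blended_effect(shares: dict) -> float:
    """GDP effect of splitting carbon-tax revenue across offset options."""
    return sum(GDP_EFFECT[option] * share for option, share in shares.items())

print(blended_effect({"capital": 1.0}))                # +0.50: all to capital
print(blended_effect({"capital": 0.5, "labor": 0.5}))  # +0.20: still positive
```

The point of the split example is simply that a blend weighted toward capital-tax cuts can remain net positive even when some revenue goes to less growth-friendly offsets.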

And remember, the above analysis assumes no benefits to the economy from limiting climate change. To the extent one thinks there are climate risks that taxing carbon emissions could mitigate, the case becomes even stronger.

So why isn’t Murphy on board with swapping carbon taxes for capital taxes? Basically because he doesn’t think it’s politically realistic:

There is no way in the world that a massive new U.S. carbon tax is going to be implemented, in which all of the new revenues are devoted to cutting corporate income taxes… We can see that the ‘fashionable’ proposals that are anywhere close to actual political proposals do not consist entirely of tax cuts on corporations. For example, the recent Whitehouse-Schatz proposal, unveiled at the American Enterprise Institute, is ostensibly revenue neutral. Furthermore, one of its features is a reduction in the corporate income tax rate from 35 to 29 percent. So far, this sounds like it’s a ‘pro-growth’ measure, right?

But hold on. The Whitehouse-Schatz proposal would also use its revenues to fund a reduction in payroll taxes (but it is a flat $550 tax credit, so it lacks ‘supply-side’ incentives and acts as a lump-sum check), and to allocate $10 billion annually in grants to states to assist low-income people who will be hit the hardest by higher energy prices.

Murphy is right that the Whitehouse-Schatz proposal is flawed (we’ve written about why here). But I’m a bit surprised to hear him dismiss ideas on the grounds that they aren’t politically realistic. Murphy is an anarchist (not that there’s anything wrong with that). His preferred solution on climate is to abolish the government and have a system of private sector judges work everything out. Whatever the merits of that idea, I would submit it’s at least as unlikely as swapping a carbon tax for cuts to the corporate income tax.

More generally, lots of political ideas start out being unrealistic, only to become law later. People who advocate for Social Security privatization or drug legalization probably recognize the uphill struggle they face in advancing their views, but that hardly means they should just give up. As Milton Friedman famously said, the basic function of a policy advocate is “to develop alternatives to existing policies, [and] to keep them alive and available until the politically impossible becomes the politically inevitable.” I happen to think the time is a lot closer for revenue-neutral carbon taxes than Murphy probably does. But it’s only going to happen if people make the case.

The Equifax Hack: Time to get serious about consumer data protection

shutterstock_63746137

What’s said about money can be said about data: No one treats other people’s information the way they treat their own.

This week, Equifax—one of the “big three” consumer credit rating and reporting agencies—disclosed a massive hack that compromised the personal information of 143 million U.S. consumers. What makes this hack so damaging is that Equifax’s databases contain a motherlode of information about consumers—names, addresses, dates of birth, Social Security numbers, bank accounts, credit cards and more—all in one place.

Such hacks fuel the supply side of identity fraud and theft. Criminal hackers sell the information wholesale via the “dark web” to other criminals, who then use it to create fraudulent credit cards or other financial accounts. The “street” value of personal data goes up the more information there is to connect to a specific individual. By itself, a credit card number has a small degree of value. Add the expiration date, and the value ticks up. Add the CVV code (the three-digit number on the back of the card), and the value ticks up more. Connect it with a name and address and Social Security number, and the value skyrockets.

If you’re lucky, the process ends with a phone call from a credit-card issuer asking you to verify a big-ticket purchase in a far-flung foreign capital. If not, you can find yourself debited for thousands of dollars in purchases you did not make and face years of battling with banks to clean up your credit rating. In the worst case, your personal or business bank accounts may be accessed and drained.

The Equifax hack is damaging in at least three ways: the number of records stolen, the wealth of information they contain and the fact that, because Equifax is a major credit-reporting company, consumers are obliged to use it to conduct everyday business, ranging from applying for retail credit to renting an apartment. This last point is critical, because it’s where the curmudgeonly criticism—that if you don’t want your data stolen, don’t put it online—breaks down. Consumers today increasingly have no choice but to put personal data online. The so-called “internet of things” will depend on it.

This is not meant as a slam. The internet of things will have enormous social benefits. Further development of the platform and accompanying applications should be encouraged. But a key element in making it work will be consumer confidence in the security of the personal data that’s collected as a matter of course.

This is why both government and commerce must address the Equifax hack as a significant problem. Although I tend to favor a light governmental hand on business, there needs to be a thorough investigation as to how this hack happened. Unfortunately, if the past is any indication, the Equifax hack will likely be traced to disregard of internally published cybersecurity protocols. The hacker may have been clever enough to break through a firewall, but that breach probably was aided by system information acquired through the target’s carelessness, such as unpatched software, default or weak administrative passwords, or credentials stored in plain sight.

All these and more violate best practices for data protection that can be found on any basic list of ways to safeguard data, be it on a home PC or a corporate server farm. When there’s loss because of failure to follow established standards of behavior, whether or not encoded in law, it’s negligence. And negligence is actionable.

If consumers are to remain confident in the security of their data in an environment where they are asked to share it in greater quantities, policy attitudes must change. That starts with the government realizing that cybersecurity is too big to be managed top down by a single “office” or “czar.” Responsibilities, strategies and tools must be distributed throughout the federal and state levels of government with the understanding that different hackers have different objectives. The Equifax hack was motivated by criminal profit. That means detection, prevention, regulations and response should be quite different here than for other targets, such as the Pentagon or defense contractors (espionage) and critical infrastructure (terrorism and cyberwarfare).

For one, the Equifax hack should be treated as an international organized crime problem. Solutions call for multilateral efforts with Interpol as well as other national police agencies. Treaties and accords should be pursued, but cooperation is possible without them. A model could be the Virtual Global Taskforce, an international private-public partnership of law-enforcement agencies, nongovernmental organizations and industry that has successfully targeted child pornography and child sexual exploitation.

But the private sector should be held accountable as well, especially when breaches occur because internal cybersecurity protocols and processes have been routinely ignored. Prosecutors should push for stronger penalties and judges should be reluctant to approve defendant-friendly settlements that fail sufficiently to punish a company for its carelessness.

Legislators should enact laws that guarantee baseline protection for consumers and compensation when negligence leads to loss. When a company requests or requires valuable personal data, it should be treated as under contract to do its best to protect that data. The best practices are already there. All the public needs is legislative teeth to ensure they are followed.

In the end, this transcends Equifax or any single data breach. Policymakers are still coming to grips with how the internet has exponentially increased the value of personal information. If consumers have little or no confidence in those they must entrust with it, the digital economy will be worse for it.


Image by Michael D Brown

The great Texas gas shortage

shutterstock_220853668

The great Texas gas shortage of September 2017 is over. But did it ever happen?

For me, it all began last Friday morning. As I was driving to my local coffee shop, I passed a gas station with a line of cars stretching out into the street. The next station I passed was even worse, with lines stretching around the block. The third station I passed had no line: it was out of gas completely. By the time I returned from my coffee run, the first two stations were out too.

The scene I witnessed that morning was playing out all over central and north Texas, as worries about supply disruptions from Hurricane Harvey led to the gasoline equivalent of a bank run. Worries that stations would soon run out of fuel became a self-fulfilling prophecy, as a cycle of panic buying caused shortages, leading to even more panic buying.

Soon, almost everywhere was out of fuel. One friend had to abandon their car partway between San Antonio and Austin because they couldn’t find gas. Another described the “post-apocalyptic” feel at a Buc-ee’s mega-gas station, which continued to be just as full of people as normal, but with empty pumps.

Public officials took to the airwaves to reassure people that there were no gas shortages. Whether this was true is mainly a matter of semantics. Claims that there was no shortage were correct in the sense that there hadn’t been a major disruption in supply. Texas is a big state, and much of the affected region had escaped serious flooding. While some refineries were offline temporarily due to the storms, there was still plenty of fuel flowing.

The real problem was not falling supply so much as a spike in demand. Some of this spike was due to sheer stupidity (pictures circulated on the internet of people filling up garbage cans with gasoline; hint – don’t be that guy!). But this was only part of the problem. A bigger issue was a shift in demand. People normally wait to refill their gas tanks until they are mostly empty. Depending on the type of car and how much it gets used, a typical person might go a week or more between fill-ups. Gas stations thus ordinarily only need enough gas on any given day to fill the tanks of a small fraction of the local population.

The concerns over fuel shortages pulled much of that demand forward. Instead of waiting until the fuel light went on, people decided to fill up with half a tank or more remaining.
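A back-of-the-envelope sketch shows how powerful that pull-forward effect can be; every number here is an illustrative assumption, not Texas data.

```python
# Illustrative sketch of demand pull-forward at gas stations.
# All figures are assumptions for the sake of arithmetic, not real data.
drivers = 100_000     # drivers served by an area's stations
tank_days = 10        # days a full tank lasts, on average

# Normal day: drivers refill only when nearly empty, so roughly
# 1/tank_days of them visit the pump on any given day.
normal_fills = drivers / tank_days

# Panic day: assume anyone below 80 percent of a tank tops off at once.
# With tank levels spread evenly, that is about 80 percent of drivers.
panic_fills = drivers * 0.8

print(f"Normal fill-ups per day: {normal_fills:,.0f}")        # 10,000
print(f"Panic fill-ups in one day: {panic_fills:,.0f}")       # 80,000
print(f"Demand multiple: {panic_fills / normal_fills:.0f}x")  # 8x
# Eight times normal demand hits stations stocked for an ordinary day,
# so pumps run dry even though total supply is unchanged.
```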

In a situation like this, what is collectively irrational can be individually rational. In fact, keeping a cooler head in such circumstances can leave you worse off, as the race goes to the swift. Luckily, in this case, the situation was short-lived. It stabilized after a few days and, by Tuesday, things were mostly back to normal. The experience, however, does not bode well for what might happen in the case of a real shortage.

There is, of course, a simple way to avoid fuel shortages when you have rising demand and steady or falling supply: raise prices. Higher prices would encourage people to conserve fuel and might even have blunted the cycle of panic buying in the first place. Higher prices also would have served as a signal to bring in more fuel to meet the higher demand. One of the strange features of the whole situation for me was how little the price of gas increased, given the lengths to which people went to get it.

The answer to this is admittedly obvious. Stations were reluctant to raise prices lest they be charged with price gouging. Laws against gouging are supposed to protect consumers but, like all forms of price control, they can easily end up making consumers worse off by denying them access to the product at any price. It’s something to consider as we look to the likely strike of Hurricane Irma this weekend, and all the other storms in the months and years to come.


Image by AHMAD FAIZAL YAHYA

 

It’s crucial that STB noms support railroad deregulation

shutterstock_584438302

The Surface Transportation Board, a federal agency with broad authority over the nation’s railroads, is currently weighing a petition that could undo most of the progress made since railroad deregulation in the early 1980s. That makes it particularly important that the Senate think long and hard about two pending appointments to the STB, which are set to come before the Committee on Commerce, Science and Transportation in the near future.

Formed in 1996 as a successor to the Interstate Commerce Commission, the STB interprets laws, promulgates rules and settles disputes related to railroads. It’s crucial that it be run by people who understand the need for a light regulatory touch, because the industry that it oversees has been a poster child of the power of deregulation.

Congress achieved that substantial railroad deregulation with the Staggers Rail Act of 1980, which eliminated costly rate controls and regulatory review processes that needlessly drove prices upward. The law was an important step to ensure that privately operated railroads could sustain themselves in a competitive manner. In fact, in the decade following the law’s passage, the rail industry was able to cut its costs and prices by half. By some estimates, shipping rates have dropped 51 percent since reforms went into effect.

But that all could change. Shipping interests who are reliant on moving their goods by rail are seeking a rule that would force railroads to lend their tracks to other railroads. This so-called “reciprocal switching” rule is based on a pair of faulty assumptions.

The first incorrect assumption is that rail lines are public property and should be treated the same as roads; they aren’t, and they shouldn’t be. For the most part, rail lines are owned by private firms. The second bad assumption is that railroads can’t coordinate use of each other’s rail lines on their own, even though they do it all the time.

President Donald Trump hasn’t yet made public his choices for the two STB seats that are set to be filled. It is vital that new members of the STB, whoever they may ultimately be, understand that a reciprocal switching rule would effectively re-regulate our nation’s rails. It is up to the Senate to ensure the nominees understand not only the details of the Staggers Act, but also its intent: to keep U.S. rails free and competitive.


Image by ideal_exclusive

The 9 lives of Richard Posner

2qb7o1h

The following blog post was co-authored by R Street Senior Fellow Ian Adams.


Love him or hate him, there is no disputing that Judge Richard A. Posner, who retired from the 7th U.S. Circuit Court of Appeals Sept. 2, is a legend of American jurisprudence. Known for his deep knowledge of economic theory, which he regularly wove into his opinions, Posner authored some of this generation’s most profound rulings in the fields of antitrust, copyright and patent.

Named by President Ronald Reagan to the 7th U.S. Circuit Court of Appeals in 1981, when Posner was just 42, he later became the favorite to replace Sandra Day O’Connor on the Supreme Court in 2005. Alas, his ascent to the nation’s highest court did not come to pass. Posner’s outspoken nature and personal disdain for the role of the high court—which he likened to “the House of Lords, a quasi-political body“—scuttled his candidacy before it could move forward in earnest.

Yet from his perch on the 7th Circuit, Posner was able to do more to develop his uniquely pragmatic and economically informed take on jurisprudence than many Supreme Court justices accomplish during their careers. His significance as a jurist is evidenced not only by his more than 3,300 opinions as a member of the federal bench, but by the fact that he became the most cited legal scholar of the 20th century. In an era defined by “purposive” and “textual” jurisprudence, Posner followed a straightforward approach: find what is right and what is wrong, and express it in colloquial language familiar and accessible to those outside the legal profession.

Naturally, strict constructionists, who aspire to hew closely to the four corners of the U.S. Constitution, saw Posner as everything that is wrong with the third branch of government. His occasionally flippant disregard for the Constitution—once going so far as to say that he saw “no value to a judge” spending any amount of time studying the Constitution’s text—could not have been better designed to trigger outrage from his colleagues and friends on the right.

Perhaps because he was largely unmoored from the past, Posner’s jurisprudence translated well to new frontiers of legal thought. Throughout his career, he was an undisputed champion of user rights in the digital age. In 2012, he wrote that protections for copyright and especially patent had become excessive. His view was simple: when protections provide an inventor with more “insulation from competition” than needed, the result is increased prices and distortions in the market. As more companies seek overly broad patents, the parties who suffer most are consumers.

In his essay “Intellectual Property: The Law and Economics Approach,” Posner spoke openly about his views on limiting copyright terms, the idea/expression dichotomy and fair use, as well as laying out a novel approach to piracy. He maintained that the analogy to “piracy” was born of a misconception that intellectual property is indeed physical property. In Posner’s view, if an individual who was never going to buy a copy of a registered work illegally copies the work, there is no market deficit. It’s only when pirates make and sell copies to individuals who would otherwise buy the work that the copyright owner is harmed. Posner didn’t excuse bad actors, but applied rigorous cost-benefit analysis to the parties and judicial economy.

Posner also was a thoughtful academic with a longtime appointment at the University of Chicago Law School, and he was committed to mentoring legal talent. Lawrence Lessig—famous for his work on remixed works and as creator of Creative Commons—once clerked for Posner. He has authored three dozen books thus far, on subjects that range from terrorism to sex. He was also the co-creator of the Posner-Becker blog, which ran until Nobel laureate economist Gary Becker’s death in 2014. The blog provided an outlet for the two University of Chicago professors to muse over rulings, explore current events and show a human side to their work.

Despite this heady list of accomplishments, the thing that may garner Posner the most ongoing acclaim from law students is his hatred of the citation manual known as the Bluebook. In his essay “The Bluebook Blues,” he wrote—tongue firmly in cheek—that all copies of the style guide should be burned because it “exemplified hypertrophy in the anthropological sense.”

Posner’s legacy will be felt for generations to come. His opinions and his other writings make clear the law is as much a tool for learning as it is a tool for justice.

Pennsylvania should reject unconstitutional internet sales tax

shutterstock_672633394

For decades, one of the thorniest issues in all of state government has been how to be even-handed in the tax treatment of merchants who sell from within state borders versus those who market online from other places in the world. Unfortunately, an approach urged recently by the Pennsylvania Senate does not provide that balanced solution.

Under a provision added by the Senate to H.B. 542—a tax reform bill the state House passed in May—any intermediary that merely facilitates a commercial transaction with a Pennsylvania resident would be required to collect and remit taxes, even if it lacks physical presence in the state. Legislation of this type adopted in other states has been held unconstitutional and should be rejected largely for that reason.

The bill incorporates provisions used by other states in laws that were drafted to challenge U.S. Supreme Court precedent, but this approach is both costly and unlikely to be successful. In South Dakota, a federal court recently enjoined a similar tax-remittance law that sought to extend the state’s taxing power beyond its borders, just as H.B. 542 proposes. Ultimately, by empowering Pennsylvania to collect taxes from businesses with no physical presence in the state, the rule immediately would draw the commonwealth into the potentially expensive and bitter cycle of litigation seen in other states. It’s a cycle unlikely to yield a positive result, because decades-old Supreme Court precedent makes clear that state taxing powers stop at the border’s edge.

This bill also imposes an undue burden on online marketplaces like eBay and Etsy, which are merely virtual storefronts that allow millions of small businesses to reach customers across the globe. H.B. 542 ignores the actual 21st century marketplace and creates new tax and compliance burdens not just on big internet companies, but also on craftsmen and entrepreneurs. It would be like making the King of Prussia Mall or the Millcreek Mall liable for all the sales taxes owed by its tenant stores anywhere in the country. Of course, that would be absurd.

Setting aside the bill’s obvious unconstitutionality, it would be decidedly unwise for Pennsylvania. By contributing to the erosion of borders as effective limits on state tax power, it will encourage poorly governed, tax-heavy states like California, New York and Illinois to unleash their aggressive tax collectors on Pennsylvania businesses and marketplace facilitators. Pennsylvanians could be subject to audit and enforcement actions in states all across the country in which they have no physical presence.

Moreover, citizens of the commonwealth largely oppose this tax grab. In a 2014 poll conducted by R Street and the National Taxpayers Union, overwhelming bipartisan majorities of Pennsylvania Republicans, Democrats, conservatives, moderates, liberals and independents answered “yes” to a question about whether “the internet should remain as free from government regulation and taxation as possible.” Moreover, by a margin of two to one, respondents said they opposed “federal legislation that changes how states collect sales tax from internet purchases.”

The U.S. Constitution was written to replace the Articles of Confederation in no small part due to the latter’s failure to prevent a spiraling interior “war” among states that could assert tax and regulatory authority outside their borders. While the Constitution’s Commerce Clause and subsequent jurisprudence make clear that taxing power must be limited by state borders, this bill seeks to wipe those limits away. The General Assembly should reject this law and avoid the ensuing legal tangle.


Image by Andriy Blokhin

 

Massachusetts’ ‘millionaires tax’ is a major misstep

shutterstock_450650347

Trying to squeeze more money out of the top income earners is a poor fiscal strategy, whether you’re looking to close budget deficits or to subsidize pet projects. Even where the tactic accomplishes its stated purposes in the short term, soaking the rich creates an unreliable revenue stream that risks driving wealthy residents, and even businesses, to other states with more accommodating tax structures.

The latest revenue-raising proposal out of Beacon Hill falls squarely into that category.

Under the Massachusetts plan, a mere 19,600 tax filers, in a state of nearly 7 million, would pay a new and higher rate. Of that fraction, just 900, who are projected to make more than $10 million annually, would be responsible for 53 percent of new tax revenues, or roughly $1 billion of the additional $1.9 billion projected from the surtax. A smaller fraction still, the top 100 earners in the state, would see their state income taxes rise from an average of $5 million to $9.3 million annually.

The additional revenue is slated to fund transportation infrastructure and the commonwealth’s educational systems, but the promise of support rests on the none-too-certain assumption that those residents subject to the surtax will actually pay these higher rates for the privilege of continuing to live in the Bay State.

Analysis by the Massachusetts Taxpayers Foundation (MTF)—the state’s pre-eminent public policy organization dealing with state and local fiscal, tax and economic policies—found that if just one-third of the 900 tax filers projected to make more than $10 million annually were to relocate, total income tax revenues would drop by approximately $750 million. Such a shift would blow a hole in the budget.
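To see how departures can cost far more than the forgone surtax alone, consider a rough sketch. The 5.1 percent base rate and 4 percent surtax are the plan’s actual parameters, but the average-income figure is a hypothetical assumption, so the output is a ballpark illustration, not a replication of the MTF model.

```python
# Rough sketch of the revenue-at-risk arithmetic. The average income is
# a hypothetical assumption; the tax rates are the plan's actual terms.
filers_over_10m = 900
departing = filers_over_10m // 3   # one-third relocate, per the MTF scenario
avg_income = 30_000_000            # assumed average income per filer
base_rate = 0.051                  # Massachusetts' existing flat income tax
surtax = 0.04                      # proposed surtax on income above $1 million
threshold = 1_000_000

# A departing filer takes both the hoped-for surtax revenue AND the
# income tax he or she already pays today.
per_filer_loss = avg_income * base_rate + (avg_income - threshold) * surtax
total_loss = departing * per_filer_loss

print(f"Loss per departing filer: ${per_filer_loss:,.0f}")  # $2,690,000
print(f"Total annual loss ({departing} departures): ${total_loss:,.0f}")
# About $807 million under these assumptions -- the same ballpark as
# the roughly $750 million figure in the MTF analysis.
```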

There is precedent for exactly this: Massachusetts enjoyed a windfall when General Electric moved its headquarters north from Connecticut. The reason for the move was clear enough. Data from the Tax Foundation, an independent tax-policy nonprofit, ranks Connecticut 43rd of the 50 states in terms of tax climate (Massachusetts ranks 27th). But one cannot help but wonder whether GE would have made the move if the millionaires tax had been pending, as it is now.

What the proposal lacks in policy wisdom, it also lacks in legal foundation: there is an open question about its constitutionality.

As written, the proposal violates the state Constitution because it is, in fact, a budget appropriation. Article XLVIII (48) of the Massachusetts Constitution lays out the guidelines for ballot initiatives and prohibits the use of such initiatives to make specific appropriations. Article 48 also mandates that ballot initiatives must have a common or related purpose. Education and transportation are unrelated matters, just as raising and appropriating funds are two separate actions. As written, this measure unconstitutionally binds voters who might want to vote for increased revenue generation, but would like it spent differently. With no precedent for this situation, it may be destined for a lengthy judicial controversy.

If, somehow, the initiative were to become law and survive judicial scrutiny, the people of Massachusetts would have real trouble undoing their mistake. Because the tax, as contemplated, would be passed in the form of a constitutional amendment, it would take a subsequent amendment to undo the millionaires’ tax. That would involve legislative approval of a subsequent constitutional amendment and a vote on the next general election ballot. In fact, should the initiative pass, the earliest a change could be made would be Jan. 1, 2023.

The need for flexibility is amplified in a region in which residents can travel from one state to the next in a matter of minutes. Consider Massachusetts’ neighbor, New Hampshire. The Tax Foundation ranks the Granite State seventh on the list of the 10 states with the best overall tax climates. New Hampshire politicos are not naïve. They certainly would work to capitalize on this misstep in “Taxachusetts” in a manner that should be familiar to our neighbors in Connecticut.

Massachusetts, and state legislatures across the country, should stop looking to the wealthy to solve budget and infrastructure woes. Even the best laid plans have unintended consequences, and targeted tax hikes on a state’s highest earners can be disruptive to both businesses and individuals. The Bay State can and should avoid the uncertainty inherent in this budgeting approach. Its long-term fiscal health depends on it.


Image by pathdoc

Hurricane Harvey shows electrical grid resiliency is key to swift recovery

shutterstock_704713648

The similarities are striking between Hurricane Katrina, which devastated New Orleans in 2005, and this week’s strike of Hurricane Harvey on Houston and Southeast Texas. Many residents of the Houston area—home to between 6 and 11 million people, depending on which way one counts—can look forward to weeks, if not months, of economic and personal discomfort and hardship as life slowly normalizes.

But even at this early date, some lessons learned over the past 12 years are resonating. The most important of these just might be the resilience of Houston’s electricity grid. Recent investments improved the grid’s operation in ways that kept electricity flowing for more than 90 percent of the area’s customers throughout the hurricane, even as more than 4 feet of water fell from the sky over two days.

By contrast, electricity didn’t return to New Orleans for weeks, a scenario that contributed directly to much of the havoc and breakdown in civil order that came to represent the post-Katrina crisis in late August and early September 2005.

It turns out that with a continued flow of electricity comes all the other attributes of modern civil life. Cell phones and land lines work, giving first responders proper direction to those most in need. Stores stay open, which depresses the likelihood of looting. People with homes that aren’t flooded can invite neighbors and displaced strangers to come to where the showers and refrigerators still work.

Houston didn’t immediately learn how to adapt its electrical grid after Katrina, but Hurricane Ike in 2008 gave the city a gut punch, taking 2 million people offline for several days. In response, the city’s power company, CenterPoint, spent nearly $500 million to reinforce the system, raising substations in low-lying areas and cutting down tens of thousands of trees along grid corridors.

As Congress meanders its way toward what could be a $1 trillion infrastructure bill in 2018, more attention should be given to grid resiliency, not just along the Gulf Coast, but everywhere where energy infrastructure is vulnerable to natural disasters.


Image by AMFPhotography

 

A wonderful confession

shutterstock_313045826

Adair Turner is an obviously very intelligent man who graduated from Cambridge University with a double first in history and economics, and whose distinguished career has included serving as chairman of the British Financial Services Authority, director-general of the Confederation of British Industry and chairman of the Pensions Commission, as well as becoming Baron Turner of Ecchinswell.

He begins his recent book, Between Debt and the Devil, with a remarkable and highly instructive mea culpa about September 2008, which includes the following:

I had no idea we were on the verge of disaster.

Nor did almost everyone in the central banks, regulators, or finance ministries, nor in financial markets or major economics departments.

Neither official commentators nor financial markets anticipated how deep and long lasting would be the post-crisis recession.

Almost nobody foresaw that interest rates in major advanced economies would stay close to zero for at least 6 [now 8] years.

Almost no one predicted that the Eurozone would suffer a severe crisis.

I held no official policy role before the crisis.  But if I had, I would have made the same errors.

To draw the necessary implication of this wonderful confession: If you think that the superior knowledge, foresight and wisdom of government financial regulators and central banks are going to save you from getting into trouble, you are suffering from a strange, misguided and irrational faith.


Image by Rawpixel.com

 

R.J. Lehmann: Trump policies could undermine post-Harvey rebuilding

R Street Senior Fellow R.J. Lehmann appeared recently on National Public Radio’s All Things Considered program, discussing the impact President Donald Trump’s order to rescind the Federal Flood Risk Management Standard will have on the recovery from Hurricane Harvey, as well as the tactical mistake former President Barack Obama made in selling the FFRMS as a climate-change adaptation measure. The full spot is embedded below:

Presidential signing statements are declining, but why?

shutterstock_348778124

The following post was co-authored by Megha Bhattacharya, outreach and communications policy research assistant at the R Street Institute.


Earlier this month, President Donald Trump signed into law the Countering America’s Adversaries Through Sanctions Act, which strengthened sanctions against Russia, North Korea and Iran. While some observers speculated Trump might veto the bill, he was faced with veto-proof majorities in both houses of Congress, which likely forced his hand.

Even though he ultimately signed the bill, Trump issued a signing statement—the second of his presidency—claiming that several portions of the law were unconstitutional infringements on his presidential power to conduct foreign affairs.

Presidents have issued signing statements for many reasons throughout history. They can be used to criticize discrete provisions in a law, clarify how the law’s text should be interpreted or even declare a portion of the law unconstitutional. Starting with President Ronald Reagan, signing statements enjoyed an uptick in popularity among modern presidents. But by the end of the George W. Bush presidency, their usage had started to decline again. A new paper by Joel Sievert and Ian Ostrander examines this drop and attempts to uncover its cause.

As Sievert and Ostrander recount, presidents traditionally have used signing statements as mechanisms to assert presidential prerogatives, including assertions that a particular piece of legislation may raise constitutional concerns. While James Monroe was the first president to issue a signing statement, the practice became more consistent during the 20th century. Their use picked up substantially during the 1980s and continued through the 2000s, culminating in a series of smaller showdowns during the Bush administration.

After Bush issued a signing statement for the Department of Defense Appropriations Act for FY 2006—objecting to provisions Sen. John McCain, R-Ariz., had inserted into the bill that restricted the use of certain interrogation techniques on enemy combatants—Congress began to more formally criticize and fight back against the use of presidential signing statements. Specifically, Congress started to convene oversight hearings regarding the practice of signing statements and even introduced legislation to regulate the president’s ability to issue such statements.

Sievert and Ostrander note that it was around this time that the Bush administration began to curtail its use of signing statements. During the first six years of his presidency, Bush issued 149 signing statements, compared to just 16 over his last two years, a trend which (for the most part) continued into the Obama administration. The authors argue that this decline can be attributed to a simple cost-benefit framework: as Congress began to push back against presidents using signing statements, the costs of issuing the statements increased significantly. As a result, presidents began to decrease their reliance on signing statements and switch to other, less controversial tools to advance presidential prerogatives.

The authors point out that presidential tools are incredibly malleable and can evolve or die out over time. Scholars have suggested that presidents increasingly have relied instead on statements of administration policy (SAPs)—which are issued while a bill is moving through Congress, rather than once it reaches the president’s desk—to take the place of signing statements.

Given the malleability of presidential tools, one might ask whether the debate over formal signing statements is a distraction from larger issues. As noted, presidents issue signing statements for many reasons, such as to influence how a law’s text is interpreted or to impact how an agency implements a portion of a law. But presidents can advance these goals through other means, suggesting that presidential actions rather than signing statements are where the real focus should be. For example, presidents can use surrogates or speeches to air any objections to a particular law, and they can use tools like SAPs or even internal communications to agencies to influence how a law’s text is interpreted and implemented.

Of course, the most intense debates surrounding signing statements arise when presidents use them to lodge constitutional objections to portions of a law that they don’t otherwise want to veto in totality. It remains a controversial question whether presidents can merely decline to enforce parts or all of a law they view as unconstitutional. But even in these cases, signing statements themselves take a back seat to the president’s actual on-the-ground actions.

As the Congressional Research Service’s Todd Garvey has noted: “If an action taken by a President in fact contravenes legal or constitutional provisions, that illegality is not augmented or assuaged merely by the issuance of a signing statement.” In other words, signing statements themselves matter less than whether a president takes tangible steps not to enforce portions of a law he or she finds unconstitutional or undesirable. This was seen most recently in the Obama administration’s decision not to enforce certain parts of the Affordable Care Act during its implementation stage—an action taken without any signing statement indicating that the president would do so. Garvey continues:

It can be argued that the appropriate focus of congressional concern should center not on the issuance of signing statements themselves, but on the broad assertions of presidential authority forwarded by Presidents and the substantive actions taken to establish that authority. Accordingly, a robust oversight regime focusing on substantive executive action, as opposed to the vague and generalized assertions of authority typical of signing statements, might allow Congress in turn to more effectively assert its constitutional prerogatives and ensure compliance with its enactments.

While Congress might be best served to focus its ire on presidential actions, rather than statements, it is noteworthy to see the national legislature stand up to the executive branch in any realm. Sievert and Ostrander suggest that signing statements are “one of the most recent fronts” in the power balance between the legislative and executive branches, and their decline shows Congress can act effectively to curb executive power. As Sievert and Ostrander put it, the decline in signing statements in the wake of greater congressional pushback and oversight “demonstrates that executive power does not increase monotonically or proceed inevitably toward aggrandizement.”

Whether such aggrandizement continues apace will depend on Congress’ willingness to push back against other instances of executive overreach as vigorously as it has against signing statements.


Image by OPOLJA

DOE study provides insight, despite controversy

shutterstock_344750282

The U.S. Energy Department’s much-discussed grid study, released this week amid a swirl of pre-publication controversy, offers a thoughtful and empirically based approach to examining current and future issues facing the electric grid. There are insights here for Congress and the Trump administration’s executive agenda, and the study provides a starting point for a civil dialogue on electricity policy under this administration.

Critics decried the study long before it saw the light of day, calling it, among other things, a “fake study” and pro-coal propaganda. Some of the initial concern was valid, as when Energy Secretary Rick Perry suggested possible federal intervention to prevent unprofitable coal and nuclear plants from retiring in the name of national security (for the record – there is no case for doing that).

But while reasonable people can disagree with aspects of the study, it is not reasonable to dismiss it as propaganda. Ironically, all the chuffing has made critics look hysterical. The ball is now in their court to respond productively. Taken together with President Donald Trump’s decision to reject a moratorium on coal plant retirements, this report signals that this administration is serious about pursuing market-enhancing policies.

The study sets the right tone by identifying market forces, not environmental regulations or subsidies, as the principal drivers of coal and nuclear retirements. It even approaches the sore subject of workforce transition, which is a difficult but appropriate conversation as market dynamism drives creative destruction in electricity markets. It accurately notes that neither a growing amount of renewables nor a trend of baseload retirements has created problems with reliability. At the same time, force-feeding changes in the generation mix—using subsidies and mandates that outpace the ability of market design and utility planning processes to adjust—could cause those problems. The study examines the implications of regulations and subsidies with appropriate tenacity and offers a reasonable set of recommendations.

The recommendations are consistent with industry experience and empirical evidence. They emphasize improvements to price formation in wholesale markets and encourage the Federal Energy Regulatory Commission to study creating market mechanisms for essential reliability services. The report also encourages efforts to examine resiliency, a distinct concept from reliability, which is another reasonable request for electric industry stakeholders. In the report’s cover letter, Perry recognizes that it’s important consumers know a resilient grid comes at a price. This also should be encouraging, as all too often, the electric industry fails to balance the benefits of reliability and resiliency with the costs.

The report’s infrastructure-development recommendations prioritize some no-brainers long overdue for reform. They call attention to the challenges that face fossil-fuel generators under New Source Review regulations, a Clean Air Act program that can have perverse effects on environmental outcomes and create excessive industry burdens. Scholars have proposed methods to revise the program to reduce those burdens while maintaining environmental quality. The report also offers helpful ideas for a forward-looking research and development agenda, for improving the coordination of the electric and natural gas industries and for areas of further research.

Addressing the plight of nuclear, the report correctly notes the need to revisit safety regulations under a risk-based approach. Similarly, hydropower regulatory reform is well past due. An R Street paper released the same day as the DOE study identified 12 priorities to stem the flow of hydropower red tape. Our report agrees with the DOE about the need to reduce regulatory burdens on hydropower licensing and relicensing processes.

Altogether, the DOE study injects civility and reasonableness into the dialogue over our electricity future, at a time when such policy debates too often come down to emotion over empirics. In particular, a cultural battle over picking favorite fuel types has resulted in a maze of distorting subsidies and preferential treatment, while distracting us from the imperative of bolstering market performance.

One can only hope that folks across the spectrum will use the DOE study to launch a sustained dialogue that guides the kinds of sensible legislative and regulatory reforms that can empower markets and consumer choice.


Image by pan demin

Indiana’s embrace of harm reduction could save lives

shutterstock_300019631

Indiana’s recent move allowing counties and municipalities across the state to approve syringe access program (SAP) operations without first obtaining state approval has been ruffling some feathers. Some even suggest that programs like Indiana’s, which provide access to clean syringes, actually increase HIV incidence.

It’s not difficult to find the evidence showing that this claim amounts to baseless fear-mongering. The overwhelming data demonstrates empirically that clean syringe access decreases HIV incidence. Indiana’s change in SAP policy reflected the real need to address the spread of HIV among injection drug users – particularly in light of the 2015 outbreak of HIV in the state’s Scott and Jackson counties. The Indiana General Assembly and Gov. Eric Holcomb did the right thing by passing and signing H.B. 1438.

But to see the data through an objective lens, it’s important first to recognize the people whom harm-reduction programs and services help. They are our neighbors, family members, colleagues and friends – members of our community.

Opposition to increased availability of SAPs often is based on a fundamental misunderstanding of the philosophy of harm reduction. It does not hold that people who use drugs are incapable of making healthy choices, leaving us no choice as a society but to enable them. To the contrary, harm reduction is based on the premise that people want to make choices that promote their health, even if they are unwilling or unable ultimately to quit using drugs. The people who take time out of their day to get clean needles, naloxone and condoms, or who attend clinic events, prove that proposition.

Syringe access programs work. Drug users who do not have needles are far more likely to share than those who do; it’s that simple. The availability of SAPs in Vancouver, British Columbia, helped decrease needle sharing among HIV-positive injection drug users from 37 percent in 1996 to 2 percent in 2014. In fact, difficult access to clean needles makes it 3.5 times more likely that a person will share needles, while access to needle-exchange programs makes it less than half as likely that a person will share a needle.

Extending these findings to incidence of HIV supports the idea that decreased needle sharing results in decreased transmission of infectious diseases. In New York City, syringe-exchange rates have correlated strongly with decreases in HIV incidence. In 1992, when 750,000 clean needles were distributed, HIV incidence rates were at 3.7 per 100 person-years. Just 10 years later, HIV incidence rates had fallen to 0.75 per 100 person-years, after needle distribution increased to 3 million.

Similar results can be found globally. In Dublin, Ireland, hepatitis C prevalence decreased 24.2 percent after SAPs were introduced. In Lang Son, Vietnam, HIV prevalence dropped from 41 to 27 percent among injection drug users following the implementation of SAPs. Studies comparing clean needle distribution in Scotland, Ireland, England, China, France, Spain, Quebec and Australia all support the idea that decreased needle sharing decreases transmission of both HIV and hepatitis C virus.

SAPs are also cost-effective. In fact, they’ve been cited as one of the most cost-effective public health interventions ever funded. It is calculated that it costs an SAP between $4,000 and $12,000 to prevent one HIV seroconversion, which is far below the estimated $385,000 it costs to treat one diagnosis of HIV. In 2008, Washington, D.C., allocated $650,000 of municipal revenue to fund SAPs. It’s estimated that, within two years, this policy change averted 120 new cases of HIV, for a projected cost savings of $44 million.
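The arithmetic behind that last figure is straightforward to check. Below is a minimal back-of-the-envelope sketch using only the estimates reported above; the small gap between its output and the cited $44 million presumably reflects discounting or other adjustments in the original analysis.

```python
# Back-of-the-envelope check on the Washington, D.C. figures cited above.
# All inputs are the estimates reported in the text, not new data.

PROGRAM_COST = 650_000        # D.C. municipal SAP funding in 2008
CASES_AVERTED = 120           # estimated new HIV cases averted within two years
COST_PER_TREATMENT = 385_000  # estimated cost to treat one HIV diagnosis

treatment_costs_avoided = CASES_AVERTED * COST_PER_TREATMENT
net_savings = treatment_costs_avoided - PROGRAM_COST

print(f"Treatment costs avoided: ${treatment_costs_avoided:,}")  # $46,200,000
print(f"Net projected savings:   ${net_savings:,}")              # $45,550,000
# The article's ~$44 million figure is in the same ballpark; the difference
# presumably reflects discounting or other adjustments in the original study.
```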

Opponents of harm reduction programs often suggest that treating addiction is, perhaps, a better long-term goal. There, we can agree. Several analyses of existing programs conclude that, rather than tacitly tolerating drug use and allowing addiction to take over communities, harm reduction programs actually correlate with increased entry into treatment. In Baltimore, people who visit SAPs are more likely to enter treatment than those who do not. In Seattle, people who use SAP services are more likely to remain on methadone treatment.

This result makes sense. Harm reduction programs, including SAPs, provide a point of intervention where a person who uses drugs can interact with a nurse, a counselor or volunteers who have the best interests of the person in mind. This is true both in the short term, with regard to providing clean syringes, and over the long term, where such services can provide options for treatment programs and facilities. This is the bare minimum we owe to our friends, neighbors and family members who struggle with this horrendous disease.


Image by sumroeng chinnapan

The Future of Traditional Urbanism: Conservatism in cities and towns

In a joint forum with The American Conservative magazine, the R Street Institute hosted a pair of July 31 panels at the Hillsdale College Kirby Center in Washington, D.C., to explore the conservative case for traditional urbanism, and outline the way forward for responsible development of U.S. cities and towns.

In the first panel—which also featured Gracy Olmstead, associate managing editor of The Federalist; Jason Segedy, director of planning and urban development for the City of Akron, Ohio; and Lewis McCrary, executive editor of The American Conservative—R Street Visiting Senior Fellow Jonathan Coppage made the case that conservatives should acknowledge and be concerned about the ways that decades of wrong-headed government planning often served to destroy architecture, cultures and vibrant neighborhood institutions that are the lifeblood of American cities.

Jon also moderated a second panel featuring New York Times columnist Ross Douthat; Benjamin Schwarz, national editor of The American Conservative; and Aaron Renn, senior fellow at the Manhattan Institute.

Full video of both panels is embedded below:

Ohio’s opioid agony

shutterstock_590129288

As college football season looms, hope springs eternal here in Columbus that our premier land-grant institution of higher learning will be atop the pile at the end of a glorious, or even a workmanlike, season.

Unfortunately, Ohio currently is instead leading the pack in the metrics for two national crises: at-risk student loans and drug overdose fatalities. My Buckeye State experienced nearly 17,000 deaths from drug overdoses from 2010 to 2016, and led the nation last year with more than 4,050 fatalities. The Ohio Department of Health is still compiling the data, so the total could go higher still.

I have attended four national conferences of state legislators this summer, and these lawmakers are well-aware that the street drug problem has gotten much worse in the last couple of years, with the addition of fentanyl to many of the traditional offerings. As a synthetic opioid, it can be manufactured and transported anywhere. It can reportedly even be bought online, if one knows where to look. It’s easy to understand the dangers posed by a substance 50 times as powerful as heroin, which currently is being mixed with other street drugs—cocaine, heroin and even marijuana—to hook users.

The story of Patrolman Chris Green—an East Liverpool, Ohio, officer—illustrates just how powerful fentanyl is. Green made national news in May when he overdosed after wiping just a small amount of fentanyl off his shirt. He had worn gloves and a mask while searching a stopped vehicle, but got the drug on his skin later when he brushed it off. He had to be administered multiple doses of naloxone before he recovered.

The worst stories, in terms of the major public policy problems that surround opioids, are those of people who have had to be administered lifesaving antidotes up to three separate times in the same 24-hour period. What can be done about those so determined to injure themselves is certainly beyond the scope of this piece, and is going to require a level of resources that is difficult to imagine.

There are 41 drug-related task forces in Ohio, and law enforcement is overwhelmed by the task of trying to “serve and protect” our citizens whose lives are dictated by access to opioids. John Born, director of the Ohio Department of Public Safety, was quoted recently in the Columbus Dispatch as saying that 20 kilos of fentanyl were taken off the streets in Ohio last week. Gov. John Kasich issued a plan in March limiting prescriptions of opioid analgesics to seven days for adults and five days for children. Ohio’s lawmakers are tackling the problem in every way they can imagine, as President Donald Trump mulls whether to declare a “national emergency.”

An extra $20 million was added to the Ohio biennial budget to be used for innovation in opioid product development, an area where there had been some progress recently. At the epicenter of the problem, scientists are working as hard as policymakers to mitigate the challenges.

The use of opioid analgesics for chronic and acute pain management—including neuropathic pain and post-surgical pain—has become commonplace and, while effective, can have unwanted side effects. The most extreme of these is addiction, to which people can transition following recreational use, misuse or abuse of these drugs.

The Sunday edition of the Columbus Dispatch separately carried a hopeful report on a novel delivery system. The company formed to manufacture this new product was initially funded by Ohio’s Third Frontier program, a ballot issue approved by Ohioans several years ago to facilitate bringing innovative products to the marketplace.

A pellet about half the size of a grain of rice delivers an analgesic that is designed to be nonaddictive. In a trial of 55 people, this analgesic has been injected into the intrathecal space of the lumbar spinal cord to mitigate pain from sciatica for up to a year with no major side effects. If it passes all the tests for efficacy and safety, this could offer at least one solution to one part of the problem we face as a nation.


Image by tab62

 

War for the Web: Countering ISIS and violent extremism online

In the wake of the recent terrorist attacks in London, U.K. Prime Minister Theresa May has been at the forefront of international calls for technology companies to do more to combat online extremism. The British government announced its intent to stamp out extremism “in all its forms, both across society and on the internet.”

In the United States, the Department of Homeland Security just announced a $10 million, two-year grant to organizations that work to improve cybersecurity and thwart terrorism. Countering violent extremism, specifically online, requires proactive steps to prevent extremist groups from recruiting and radicalizing followers worldwide. This effort, now more than ever, requires increased cooperation between the private, public and academic sectors, among others. For their part, tech companies have been experimenting with new techniques and guidelines.

These are complex issues at the intersection of freedom of expression and national security. How will all of the proposed changes and solutions express themselves online, domestically and abroad? How do these efforts to identify and prevent early online radicalization square with the First Amendment and notions of freedom of expression?

Arthur Rizer, R Street’s director of national security and justice policy, took part in a July 21, 2017 panel discussion on these and related issues hosted by the Advisory Committee to the Congressional Internet Caucus. Other panelists included Kevin Adams of the British Embassy, Alexander Meleagrou-Hitchens of George Washington University’s Project on Extremism, Mark MacCarthy of the Software & Information Industry Association and Clara Tsao of DHS’ Countering Violent Extremism Task Force.

Video of the discussion is embedded below:

Trump wisely rejects emergency order for coal

shutterstock_519756799

The Trump administration this week confirmed it has rejected a coal industry request seeking an emergency order for a two-year moratorium on coal-plant closures. This avoids what would have been an unprecedented and economically damaging intervention in electricity markets, without even the benefit of greater reliability. The move marks a sharp break from the all-tools-considered approach to reinvigorating coal, as the president reportedly had previously committed to the measure in private conversations with industry executives.

The Federal Power Act grants the U.S. Energy Department emergency authority to order continued operations of power facilities. In April, Energy Secretary Rick Perry announced the possibility of federal intervention to protect coal and nuclear plants in the name of national security, which would pre-empt state policies. The announcement coincided with the launch of an Energy Department study on so-called “baseload” power-plant retirements.

A massive moratorium on power-plant closures, especially those brought about by market forces, would heavily distort electricity markets and deter, if not outright freeze, new capital investment. Fatally undermining the investment climate could paradoxically worsen energy reliability by muting the price signals that competitive electricity markets use to meet reliability requirements. Furthermore, using a national security mechanism when there is no national security concern would be an abuse of the emergency authority. Doing so while overriding the states would also leave a deep federalist scar.

This may even beat coal-production subsidies as the worst energy policy idea. Fortunately, many productive energy-policy corrections are on the table for the administration.

A reset on coal policy should be consistent with market principles, not a form of reverse-industrial policy to counter the prior administration’s favoritism to renewables. Thoughtful deregulation is an appropriate approach. So is lifting restrictions on coal exports or international financing for coal development. But subsidies and knee-jerk responses—a protectionist emergency order being the worst among them—would be deeply damaging and harm the economy.

One hopes this is a sign the president and his senior energy advisors recognize that economic transitions are necessary and healthy when they are supported by market forces. Coal’s biggest foe is shifting market fundamentals – namely, inexpensive natural gas and declining demand. Subsidies for coal’s competitors are a lesser factor, and the administration should deal with those in a manner that predictably and sustainably reduces the subsidy regime, not one that further entrenches it.

The surge in coal-plant retirements this decade was due mostly to a combination of environmental rules and market forces, with the latter being the main driver going forward. The mid-Atlantic region experienced more than 20 gigawatts of coal retirements already (equivalent to about three-quarters of New England’s peak demand). Markets facilitated new resources to take coal’s place. As leading industry economists note, the emergence of these alternative resources has been surprisingly robust and posed no clear reliability concern. Overall, most electric reliability metric trends are stable or improving.

Clearly, the doomsday reliability claims (e.g., coal retention as a national security issue) of some uncompetitive industries have proved unfounded. Still, achieving continued reliability requires market rules and monopoly-utility-planning processes to evolve as unconventional resources become more economical. The administration can aid this by listening to industry experts, not the desperate claims of rent-seeking industry members.

The dismissal of a blatantly anti-market idea could, one hopes, point the way toward a more refined approach for this administration’s energy policy. The forthcoming U.S. Energy Department study has much potential to assess the regulatory and market environment fairly and to suggest market-enhancing improvements. Further work to improve the alignment of wholesale electricity market rules with electric reliability requirements is one such path to let markets, not government, decide the fate of the coal industry and all other power sources.


Image by Rudmer Zwerver

 

Trump’s ‘energy dominance’ strategy starting to crack Eastern European markets

shutterstock_547606072

The U.S. Energy Department announced Aug. 21 that a cargo ship full of Pennsylvania coal would be sailing out of Baltimore, 5,600 nautical miles across the Atlantic Ocean, Mediterranean and Black Seas to Ukraine – the first shipment of its kind.

Such shipments hold significance in a variety of ways, and offer a possible window into the Trump administration’s desire to use energy trade to offset the aggressive geopolitical behavior of Russian President Vladimir Putin. In March, Ukraine cut off deliveries of coal from the Russian-controlled region of Donbass, where much of Ukraine’s coal industry resided before the 2014 conflict between Ukraine and Russia began.

Centrenergo, the Ukrainian power utility, has been struggling since March to replace the blocked coal supplies. Now, Latrobe, Pennsylvania-based XCoal Energy will send 700,000 tons of anthracite coal to Odessa over the next several months. The agreement follows a June meeting between Ukrainian President Petro Poroshenko and U.S. President Donald Trump.

This week also marks the first shipment of liquefied natural gas from the United States to the Baltic state of Lithuania, a former Soviet satellite, which until recently was completely dependent on Russian gas supplies. Lithuania and the other Baltic states of Latvia and Estonia have at times been under withering political pressure from their former Cold War patrons, particularly since Moscow occupied the Crimea in 2014.

Trump’s visit to Poland in early July included a speech that highlighted his administration’s desire to exert a counter-force on Russia through the energy markets, “so that you can never be held hostage to a single supplier,” Trump said.

Poland received its first U.S.-based shipment of LNG in July, and the single U.S. LNG export terminal at Sabine Pass on the Texas-Louisiana border has sent out more than 160 cargoes since starting up in February 2016. Several of those cargoes have reached Spain, Italy, the United Kingdom and the Netherlands in the past 18 months.

U.S. LNG export capacity is set to grow five-fold by 2020, potentially leaving Eastern European markets awash in natural gas just as many of the long-term delivery contracts signed between Eastern Europe and Russia’s natural gas monopoly Gazprom are set to expire in the early 2020s.


Image by Anatoly Menzhiliy

 

Rep. Meadows introduces bill to lock in regulatory budgeting

shutterstock_591208709-(1)

Those who champion slowing the growth of the regulatory state earned a victory earlier this year with President Donald Trump’s “two-out-one-in” executive order, requiring federal agencies to eliminate two old regulations for every new one they enact. The order also established a type of regulatory budget that caps the amount of regulatory costs agencies can impose on the economy during a given year.

But as R Street previously has argued, such executive branch actions, particularly in the area of deregulation, are unlikely to be lasting unless they are codified. Codification ensures that deregulatory efforts are locked in and not subject to reversal by a future president.

Toward that end, the latest good news is that H.R. 2623, legislation that effectively would codify Trump’s order, has been introduced in the U.S. House by Rep. Mark Meadows, R-N.C., chairman of the House Freedom Caucus.

Unlike past regulatory budgeting legislation, Meadows’ bill would not task Congress with setting the regulatory budget, instead granting that responsibility to the White House Office of Management and Budget. While such a structure may be the best short-term option to codify a regulatory budget, Congress ideally would be the branch responsible for setting how much regulatory cost agencies could impose each year. A further concern is ensuring that OMB has the resources and manpower necessary to administer the regulatory budget.

Regardless, the Meadows bill should be welcomed as a step toward a more sustainable deregulatory effort.


Image by Maythaphorn Piyaprichart

New internet tax threatens privacy of Washington customers

shutterstock_255592126

In their zeal to shake a few more tax dollars out of Washington residents’ pockets, state lawmakers are brushing aside legitimate privacy concerns raised recently by civil-liberties groups. Under the new internet sales-tax law signed by Gov. Jay Inslee last month, the Washington Department of Revenue could learn more than most of us want it to know about our online purchases.

State officials vow the information provided to them by online retailers to facilitate the collection of the so-called “use” tax will be held in the utmost confidence. But from police agencies to the Internal Revenue Service, government bureaucracies have far from an unblemished record when it comes to protecting private records.

If you’ve bought nothing weird, then maybe you’ve got nothing to hide. And maybe the retailers, or third-party websites like eBay, will do the state’s bidding and collect the tax for the department without turning over any information. Or maybe not. The new law gives out-of-state sellers the option to “voluntarily” collect Washington sales taxes or to provide the names, addresses and purchase information to the revenue folks in Olympia. As a consumer, the decision won’t be yours to make.

The stores must provide purchase amounts rather than a list of specific items. But this can be small comfort for those who patronized, e.g., a mental-illness center, a paraphernalia shop or a company that sells sex toys. Current law requires sellers with a brick-and-mortar presence in the state to collect taxes from in-state consumers, but the tax collectors say they want to “level the playing field.” Your privacy wasn’t much of a concern when they passed the law.

Even if your raciest online purchase is a calendar with cutesy cat photos, you ought to be concerned about the costly implications. First, there are your personal costs. The new law is, after all, a tax increase on Washington residents’ purchases. Then there’s the likely cost to the general fund, as state officials defend the law for years in the federal courts. The state balanced its budget based on revenue assumptions from the tax (an estimated $1 billion over the next four years), but those collections will be put on hold through the length of the trials.

Lawmakers are confident that they are on solid legal ground, because the federal 10th Circuit Court of Appeals, after six and a half years of litigation, upheld Colorado’s internet sales tax law. The U.S. Supreme Court recently refused to review the Colorado decision, which cleared the way for that state to begin collections from out-of-state online customers.

But there’s no guarantee the federal 9th Circuit Court of Appeals, which oversees Washington state matters, will reach the same conclusion. There are other significant differences between the Washington and Colorado laws, even though Washington legislators used Colorado as a model. Those, too, could lead to a different outcome.

Both states require sellers that don’t collect the sales tax to provide personal information about online purchasers to their respective revenue departments. But the Washington law applies to companies that gross more than $10,000 a year in sales to in-state residents, whereas Colorado’s threshold is 10 times higher. “Washington’s puts more responsibility on so-called ‘marketplace facilitators’ and other internet ‘middlemen,'” according to a Tacoma News Tribune report.

That Colorado case centered on the U.S. Constitution’s Commerce Clause, which governs business among the states. The Direct Marketing Association challenged the law based on a seminal 1992 U.S. Supreme Court ruling (Quill Corp. v. North Dakota) holding that state officials can collect sales taxes from a business only if it has a physical presence in the state. For instance, Seattle-based Amazon has long collected taxes for sales to residents living in Washington.

But the 10th Circuit ruled that “Quill applies only to the collection of sales and use taxes, and the Colorado law does not require the collection or remittance of sales and use taxes. Instead, it imposes notice and reporting obligations.”

Nevertheless, there are many reasons to question the Washington law. The $10,000 threshold imposes a burden on small businesses, given that they will need to maintain detailed reports on buyers in the state. If this law withstands court scrutiny, similar tax schemes will spread like a bad internet rumor. Even the tiniest enterprises, located here and elsewhere, will have to collect data to meet the varied demands of 50 state revenue offices – or face $20,000-plus penalties.

All U.S. internet companies, not just the small ones, will face disadvantages, given that Olympia’s tax grabbers will not be able to enforce the statutes on sellers based in Shanghai or New Delhi. And Commerce Clause arguments won’t be the only ones that might tie the law up in court.

In 2010, the U.S. district court in Seattle rebuffed efforts by the North Carolina secretary of revenue to receive detailed purchase information from Amazon as part of a tax audit of North Carolinians’ purchases. The court found that “citizens are entitled to receive information and ideas through books, films, and other expressive materials anonymously.”

That’s true even when the government is seeking the information for tax purposes rather than to censor. Do we really trust any government agency with such personal information? Unfortunately, we’re now at the mercy of the courts and Congress to protect such privacy rights.


Image by koya979

 

The FDA’s words and actions do not match

shutterstock_487676782

In a new paper in the New England Journal of Medicine, newly minted Food and Drug Administration Commissioner Scott Gottlieb and Mitch Zeller, longtime director of the FDA Center for Tobacco Products, commit themselves to a science-based regulatory framework that takes into account “the continuum of risk for nicotine-containing products” to reduce tobacco-related addiction, illness and death. To do this, the paper promises an FDA commitment to reduce nicotine levels in cigarettes to “non-addictive levels” and to “foster innovation in less harmful nicotine delivery.”

The proposal to reduce the nicotine content of cigarettes is based on two recent papers by Eric C. Donny and colleagues – one that also appeared in the NEJM and another that appeared in the journal Preventive Medicine. The Preventive Medicine paper outlines the issues that need to be addressed for such regulations to be developed. The NEJM paper details a six-week trial in which smokers were given free low-nicotine cigarettes to use, in addition to their usual cigarettes. The trial showed that smokers of low-nicotine cigarettes did not smoke more cigarettes on the last day of the trial than they had on the first day. Neither paper provides a sound basis to replace traditional cigarettes as we know them with a product that does not deliver significant amounts of nicotine.

For the past six years that Zeller has directed the Center for Tobacco Products, he has spoken of a “continuum of risk” and said he favored “fostering innovation.” Unfortunately, what has been reflected in FDA policy is the direct opposite of those notions. The FDA has imposed no regulatory burden on cigarettes that wasn’t already imposed before adoption of the Tobacco Control Act of 2009. Meanwhile, the FDA continues to impose nearly impossible-to-meet requirements for approving any new product for the marketplace, including those that claim lower risk than cigarettes.

Despite overwhelming evidence that e-cigarettes are far lower in risk than cigarettes, and that they do not recruit teens to nicotine who otherwise would not have smoked, the FDA has done nothing to confirm or deny these research findings. Indeed, it has just announced a new anti-e-cigarette campaign.

The smokeless tobacco products currently on the American market have long been known to be far lower in risk than cigarettes. The FDA has done nothing to share this information with the public, and has even proposed a new set of smokeless tobacco regulations that threaten to remove almost all current smokeless products from the market.

FDA regulations continue to mandate that even the smallest change to any tobacco-related product now on the market triggers immediate removal of that product, pending a new, costly application for FDA approval of the modified version. So much for encouraging innovation.

The bottom line is this: neither science nor the fine words in this latest NEJM piece have anything to do with FDA policies that continue to protect cigarette sales and profits from competition from lower-risk nicotine-delivery products.


Image by Gustavo Frazao

 

How big a bank is too big to fail?

shutterstock_620476235

The notion of “too big to fail”—an idea that would play a starring role in banking debates from then to now—was introduced by then-Comptroller of the Currency Todd Conover in testimony before Congress in 1984. Conover was defending the bailout of Continental Illinois National Bank. Actually, since the stockholders lost all their money, the top management was replaced and most of the board was forced out, it was more precisely a bailout of the bank’s creditors.

Continental was the largest crisis at an individual bank in U.S. history up to that time. It has since been surpassed, of course.

Conover told the House Banking Committee that “the federal government won’t currently allow any of the nation’s 11 largest banks to fail,” as reported by The Wall Street Journal. Continental was No. 7, with total assets of $41 billion. The reason for protecting the creditors from losses, Conover said, was that if Continental had “been treated in a way in which depositors and creditors were not made whole, we could very well have seen a national, if not an international crisis the dimensions of which were difficult to imagine.” This is the possibility that no one in authority ever wants to risk having happen on their watch; therefore, it triggers bailouts.

Rep. Stewart McKinney, R-Conn., responded during the hearing that Conover had created a new kind of bank, one “too big to fail,” and the phrase thus entered the lexicon of banking politics.

It is still not clear why Conover picked the largest 11, as opposed to some other number, although presumably it was because he needed to make Continental appear somewhere toward the middle of the pack. In any case, here were the 11 banks said to be too big to fail in 1984, with their year-end 1983 total assets – which, to current banking eyes, look medium-sized:

alex chart

If you are young enough, you may not remember some of the names of these once prominent banks that were pronounced too big to fail. Only two of the 11 still exist as independent companies: Chemical Bank (which changed its name to Chase in 1996 and then merged with J.P. Morgan & Co. in 2000 to become JPMorgan Chase) and Citibank (now Citigroup), which has since been bailed out, as well. All the others have disappeared into mergers, although the acquiring bank adopted the name of the acquired bank in the cases of Bank of America, Morgan and Wells Fargo.

The Dodd-Frank Act is claimed by some to have ended too big to fail, but the relevant Dodd-Frank provisions are actually about how to bail out creditors, just as was the goal with Continental. Thus in the opposing view, it has simply reinforced too big to fail. I believe this latter view is correct, and the question of who is too big to fail is very much alive, controversial, relevant and unclear.

Just how big is too big to fail?

Would Continental’s $41 billion make the cut today? That size now would make it the 46th biggest bank.

If we correct Continental’s size for more than three decades of inflation and express it in 2016 dollars, it would have $97 billion in inflation-adjusted total assets, ranking it 36th as of the end of 2016. Is 36th biggest big enough to be too big to fail, assuming its failure would still, as in 1984, have imposed losses on hundreds of smaller banks and large amounts of uninsured deposits?
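That adjustment is simple ratio arithmetic. Here is a minimal sketch; the CPI-U values are approximate annual averages, and the exact price index used for the comparison above is an assumption on my part:

```python
# Sketch of the inflation adjustment described above. The CPI-U values are
# approximate annual averages; the author's exact index choice is unknown.

NOMINAL_ASSETS_1983 = 41e9   # Continental Illinois total assets, year-end 1983
CPI_1983 = 99.6              # approximate CPI-U annual average, 1983
CPI_2016 = 240.0             # approximate CPI-U annual average, 2016

adjusted = NOMINAL_ASSETS_1983 * (CPI_2016 / CPI_1983)
print(f"2016 dollars: ${adjusted / 1e9:.0f} billion")
# Prints roughly $99 billion, in the same range as the ~$97 billion cited;
# a slightly different index or base period would account for the gap.
```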

If a bank is a “systemically important financial institution” at $50 billion in assets, as Dodd-Frank stipulates, does that mean it is too big to fail?  Is it logically possible to be one and not the other?

Let us shift to Conover’s original cutoff, the 11th biggest bank. In 2016, that was Bank of New York Mellon, with assets of $333 billion. Conover would without question have considered that—could he have imagined it in 1984—too big to fail. But now, is the test still the top 11?  Is it some other number?

Is $100 billion in assets a reasonable round number to serve as a cutoff? That would give us 35 too big to fail banks. At $250 billion, it would be 12. That’s close to 11. At $500 billion, it would be six. We should throw in Fannie Mae and Freddie Mac, which have been demonstrated beyond doubt to be too big to fail, and call it eight.
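Mechanically, each of those counts is just a threshold filter over a ranked list of bank assets. A toy sketch follows, with placeholder figures rather than actual 2016 balance-sheet data:

```python
# Toy illustration of the cutoff exercise: count banks above an asset threshold.
# The asset figures here are hypothetical placeholders, not actual 2016 data.

banks_2016 = {          # total assets in billions of dollars (illustrative only)
    "Bank A": 2100, "Bank B": 1900, "Bank C": 1800, "Bank D": 450,
    "Bank E": 330, "Bank F": 120, "Bank G": 95,
}

def too_big_to_fail(banks, cutoff):
    """Return the banks at or above the asset cutoff (in billions)."""
    return [name for name, assets in banks.items() if assets >= cutoff]

for cutoff in (100, 250, 500):
    group = too_big_to_fail(banks_2016, cutoff)
    print(f"${cutoff}B cutoff: {len(group)} banks -> {group}")
```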

A venerable theory of central banking is always to maintain ambiguity. A more recent theory is to have clear communication of plans. Which approach is right when it comes to too big to fail?

My guess is that regulators and central bankers would oppose anything that offers as bright a line as “the 11 biggest”; claim to reject too big to fail as a doctrine; strive to continue ambiguity; and all the while be ready to bail out whichever banks turn out to be perceived as too big to fail whenever the next crisis comes.


Image by Steve Heap

 

PACER might be the government’s worst website

shutterstock_472129663

The following is a guest post by Tom Lee, former chief technology officer for the Sunlight Foundation.


When hackers are able to steal your money, it’s usually safe to call that a website’s least appealing feature. Astonishingly, that’s not true of PACER—the Public Access to Court Electronic Records system, run by the Administrative Office of the Courts—which charges for downloads of essential federal court records. In its case, hackability comes second to the bad and perhaps even illegal deal that it offers the public.

The exploit is real, mind you. The good people of the Free Law Project uncovered it months ago as part of their work to democratize legal information. Now that PACER has patched the vulnerability, FLP has disclosed the gory details.

The problem revolves around a cross-site request forgery attack. When you connect to a website, it’s normally able to store small amounts of data called “cookies” on your computer. Any time your browser makes a request to that site, it will send those cookies, along with the request. Sites can tell if a request comes from a logged-in user by examining the request for unique cookie values that were set after a successful authentication attempt and comparing those values to copies stored in the site’s database.

Code running on a different malicious website that you visit can’t look at the cookies of other websites. But it can make requests to other websites, and those requests will carry the other sites’ cookies. If those cookies identify a logged-in user, the malicious site can make invisible requests that trigger real actions on that user’s behalf on the target site.

There are standard ways to detect and defend against this, but PACER hadn’t used them. Although there is no proof that it happened, a malicious site could have made requests on behalf of logged-in users, downloading documents and racking up fees.
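The most common of those standard defenses is a synchronizer token: the server embeds a random per-session value in each form and rejects state-changing requests that fail to echo it back. Below is a minimal sketch of the idea in Python; it is illustrative only, not PACER’s actual code or the specific fix it deployed.

```python
import hmac
import secrets

# Minimal synchronizer-token sketch of a standard CSRF defense.
# Illustrative only; not PACER's actual implementation.

def issue_csrf_token(session_store: dict, session_id: str) -> str:
    """Generate a random token, store it server-side, and embed it in forms."""
    token = secrets.token_urlsafe(32)
    session_store[session_id] = token
    return token

def verify_csrf_token(session_store: dict, session_id: str, submitted: str) -> bool:
    """Reject the request unless the submitted token matches the stored one.

    A malicious third-party site can make the browser send the target site's
    cookies, but it cannot read this token, so forged requests fail the check.
    """
    expected = session_store.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)

# Usage sketch:
sessions: dict = {}
token = issue_csrf_token(sessions, "user-123")        # rendered into the page's form
assert verify_csrf_token(sessions, "user-123", token)          # legitimate request
assert not verify_csrf_token(sessions, "user-123", "forged")   # cross-site forgery
```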

That’s bad. But it’s not the worst thing about PACER—that would be the fees themselves. PACER makes some kinds of documents free, but for many others, it charges 10 cents per page. Barring some truly incredible technical mistakes, that number is vastly more than the cost of serving a page of content. And it has remained at that level for many years, despite advancing technology and falling bandwidth and storage costs.

Legal actions often involve huge page counts, which means that PACER fees add up. And they render some kinds of research and scholarly work totally impractical.

Even worse, those fees might be illegal. The Administrative Office of the Courts is barred by the E-Government Act of 2002 from charging more for PACER than it costs to maintain the system. But there is evidence that AO is not in compliance with the law. In 2014, PACER collected $145 million in fees. Five years earlier, it had been projected to cost $30 million per year to maintain. Many suspect that PACER fees are being used to subsidize other line items in the agency’s budget.

A class-action lawsuit is underway that aims to untangle all of this; if you used PACER between 2010 and 2016, you might be a part of it. But even if you’re not, you can still help to democratize the system’s information. Since the government doesn’t hold copyright over PACER records, there’s nothing stopping you from sharing them with the world after you pay your 10 cents per page. The RECAP project is run by the Free Law Project and Princeton University’s CITP program, and provides browser extensions that automate and centralize this process. It will let you download records from the RECAP archive when they’re available, or contribute newly purchased PACER records to the archive automatically when they’re not.

PACER doesn’t charge for balances less than $15 per quarter, so if you’re feeling civic-minded, why not download RECAP, make a PACER account and liberate some court records for the public good? Now that they’ve patched their vulnerability, it might even be safe to do so.


Image by fizkes

 

How messaging smart flood planning as ‘climate’ policy led to its demise

shutterstock_604954820

When a bunch of reporters called me to discuss President Donald Trump’s decision to turn back Obama-era flood protection standards, I was happy to criticize the administration, because I think the standards were one of the few unalloyed good things the Obama administration did. They sent a clear message that federal taxpayers won’t pay for building in flood-prone areas and that federally funded infrastructure will be designed to stand up to nature.

The Federal Flood Risk Management Standards, promulgated by a January 2015 executive order, drew on the principles of President Ronald Reagan’s great Coastal Barrier Resources Act, which forbade development subsidies for barrier islands and barrier beaches while leaving the private sector free to do as it pleased. This is a great policy.

But as I wrote in the Weekly Standard not long after the standards came out, the Obama administration made a serious political (and, arguably, factual) error by choosing in their public statements to label the standards a climate-change-adaptation measure. Now, it’s absolutely true that greenhouse gas emissions have resulted in thermal expansion of seawater and some ice melt in polar regions. These factors (mostly the former) have resulted in sea-level rise. This results in more flooding. In fact, an increase in “sunny day” flooding is one of the very few easy-to-observe widespread phenomena that we can link to greenhouse gas emissions in a convincing fashion.

That said, the areas most at-risk now and in the near future are almost all places where climate change isn’t the dominant concern. Changes in the levels of continental plates, as well as land loss caused by hydrological projects and other human activity, can have local impacts hundreds of times larger than those caused by global warming. Purely natural processes like erosion and seasonal plant growth also can change which particular areas will flood, how badly and how often. In any given area, these factors can be far more likely to make the difference than sea-level rise, which generally proceeds at a scale noticeable only after decades have passed. The folks who wrote the Obama executive order—I talked with them a bunch—knew this well and wrote the order in a neutral fashion to deal with whatever was causing flooding.

In its press statements and publicity, however, the Obama administration insisted on positioning the EO as a response to climate change. While any number of factors—including a genuine desire to cut red-tape surrounding infrastructure projects, pressure from builders and his own career as a real-estate developer—each played a role in Trump’s decision to rescind the order, I can’t help but think that a simple distaste for anything the Obama administration labeled as “climate policy” may have been the driving motivation to repeal the standards.

In part because climate change policy has become such a political hot potato—and because so many on the left have turned it into a culture war issue—focusing on climate change was clearly the wrong move for the Obama administration. As a result, the wrong messaging may have contributed to a very unfortunate policy decision.


Image by MaryValery

 

Clark Packard talks NAFTA renegotiation on Fox

clark

President Donald Trump, Mexican President Enrique Peña Nieto and Canadian Prime Minister Justin Trudeau are set to meet in Washington today, and over the next three days, for the first round of talks to renegotiate the North American Free Trade Agreement.

R Street Trade Policy Analyst Clark Packard, who back in June co-authored R Street’s comments to the Office of the U.S. Trade Representative on the subject of NAFTA renegotiation, discusses the history of the agreement, its benefits and ways it still could be improved in a new FoxNews.com video profile, embedded below.

Andrew Heaton on how to stop patent trolls

In his latest video for Reason’s Mostly Weekly series, R Street Associate Fellow Andrew Heaton takes on the subject of patent trolls and what to do about them — particularly in light of a recent decision by the U.S. Court of Appeals for the Federal Circuit that Personal Audio LLC doesn’t own the patent on the entire podcasting industry.

Kosar talks postage rates on APM’s Marketplace

R Street Vice President of Policy Kevin Kosar appeared on American Public Media’s Marketplace Morning Report to discuss efforts by the U.S. Postal Service to gain more flexibility to raise rates without congressional approval, and how that could allow it to cross-subsidize its businesses that compete directly with the private sector.

R Street’s voting guide for SXSW panels!

vote

VOTING ENDS AUGUST 25!!!!

We’ve put together some great policy panels for next year’s SXSW conference in Austin, Texas. BUT WE NEED YOUR HELP to get in the final conference program!

Please vote for us and help bring free-market ideas to Austin’s annual gathering of technologists, activists and entrepreneurs.

Panels featuring R Streeters:

Global Ecosystems and the Policies that Support Them: CLICK TO VOTE!

  • Featuring: Melissa Blaustein, founder and CEO, Allied for Startups;
  • Zach Graves, tech policy program director and senior fellow, R Street Institute;
  • David McCabe, technology reporter, Axios; and
  • U.S. Rep. Blake Farenthold, R-Texas.

How Scientology and Porn Shaped the Internet: CLICK TO VOTE!

  • Featuring: Sasha Moss, technology policy manager, R Street Institute;
  • Christian Dawson, co-founder and executive director, Internet Infrastructure Coalition (i2C);
  • Aaron Perzanowski, professor of law, Case Western Law School; and
  • Katie Oyama, Google.

RoboCop: Is Artificial Intelligence the Future of Criminal Justice? CLICK TO VOTE!

  • Featuring: Arthur Rizer, national security and justice policy director, R Street Institute;
  • Ryan Calo, assistant professor, University of Washington School of Law;
  • Heather West, senior policy manager, Americas principal, Mozilla; and
  • Vikrant Reddy, senior research fellow, Charles Koch Institute.

Virtual Reality Codes of Conduct in the Virtual Wild West: CLICK TO VOTE!

  • Featuring: Anne Hobson, associate fellow in technology policy, R Street Institute;
  • James Hairston, head of public policy, Oculus;
  • Alexis Kramer, legal editor, Bloomberg BNA; and
  • Matthew Schruers, adjunct professor of law, Georgetown University Law Center.

We’ve also put together a great list of policy panels from our friends! 

If you have any panels you’d like us to add to this list, please email Sasha Moss: smoss@rstreet.org.

 

Congress may be more bipartisan than you think

shutterstock_389589580

At the Library of Congress’ Congress and History conference, political scientists James Curry and Frances Lee presented their working paper “Non-Party Government: Bipartisan lawmaking and theories of party power in congress.” In the paper, the authors examine the degree to which increases in polarization and the centralization of power in Congress have resulted in strictly partisan lawmaking.

In short, they want to know if the common characterization of Congress is accurate: in our current era of hyperpolarization and confrontational politics, do majorities in Congress skip bipartisan legislating and pass bills over the strong objections of the minority? Turns out – not so much. Curry and Lee “find that lawmaking today is not significantly more partisan than it was in the 1970s and 1980s.”

Such a conclusion is a bit counterintuitive, given seemingly constant claims that parties are unwilling and unable to work together. Both parties have accused the other of ramming legislation down the throats of the minority without even a semblance of compromise or debate. Democrats have most recently leveled that charge at the GOP’s maneuvers regarding the American Health Care Act.

The perception that majorities run roughshod over minorities is based on a couple of observable characteristics of recent Congresses. First, institutionally, increased polarization has diminished overlaps in policy preferences between the parties, theoretically decreasing the likelihood of reaching bipartisan agreements. Additionally, stronger, more cohesive political party organizations have developed, which have subsequently centralized power in leadership offices in order to facilitate partisan lawmaking. As articulated by the authors, “Members have provided their leaders a bevy of procedural and agenda-setting tools to structure the legislative process in ways that stand to benefit the majority party.” Among these tools are the bypassing of the traditional committee-driven legislative process in favor of leadership-managed policy creation and the granting to leadership of a near monopoly over deciding what issues come up for a vote.

Both of these factors—polarization and more cohesive parties with centralized power—lead observers to hold two important expectations:

  1. Bills that are actually signed into law are likely to be passed without bipartisan support;
  2. The majority party is more effective at realizing its legislative agenda, in spite of minority opposition.

Curry and Lee, however, show that neither of these expectations is supported by the data.

For their analysis, the authors compile all passage votes in both chambers for bills that became law in the 93rd-113th congresses (1973-2014). Additionally, Curry and Lee use a subset of bills identified as “landmark legislation” by fellow political scientist David Mayhew to examine whether these more significant bills received less bipartisan support due to their increased impact and salience.

Three key findings from the paper are discussed briefly below, all of which suggest that lawmaking in Congress still generally requires, and receives, bipartisan support.

download

Most laws, including landmark legislation, are passed with strong bipartisan support. The above figure shows the average percentage of minority-party support on all bills that became law during each congress from 1973 to 2014 in the House of Representatives. Contrary to expectations, the figure shows no clear trend line of decreased minority support. On all bills that became law during this period, more than 60 percent of minority lawmakers voted in favor of passage on average, and in many congresses more than 80 percent of the minority voted yes. In fact, in the most recent congresses where polarization is most intense, we find the percentage of minority support is even higher than in less-partisan congresses of previous decades.

On landmark laws we see more variation in minority support across congresses, but still find that, on average, more than 65 percent of minority lawmakers vote in favor of these laws. Only in two congresses, the 103rd and the 111th, does the percentage of minority support fall below 50 percent. Similar patterns are found in the Senate, though not discussed here. (Please see the linked paper for the data and analysis for the upper chamber.)

download 2

Only rarely does the majority pass laws over the opposition of a majority of the minority party. The above figure shows the percent of laws that were passed in the House despite a majority of the minority voting no – this is referred to as the minority getting rolled by the majority. On average, the minority roll rates were less than 15 percent for all laws passed during the period under study. In only a handful of congresses does the roll rate get above 25 percent, with the 103rd Congress showing the highest roll rate of more than 30 percent. Again, we see no upward trend in roll rates despite stronger parties and increased centralized power in leadership offices.
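To make the metric concrete, here is a small sketch of how a roll rate could be computed from passage-vote tallies. The vote counts are hypothetical, and this is my own illustration, not Curry and Lee’s actual dataset or code:

```python
# Sketch of the "roll rate" metric: the share of enacted laws on which a
# majority of minority-party members voted no. Vote counts are hypothetical.

def minority_was_rolled(minority_yes: int, minority_no: int) -> bool:
    """A roll: the bill passed despite most minority members opposing it."""
    return minority_no > minority_yes

def roll_rate(passage_votes: list) -> float:
    """Fraction of enacted laws on which the minority was rolled."""
    rolls = sum(minority_was_rolled(y, n) for y, n in passage_votes)
    return rolls / len(passage_votes)

# Each tuple is (minority yes votes, minority no votes) on a bill that passed.
votes = [(150, 30), (95, 88), (20, 160), (110, 70), (45, 130)]
print(f"Roll rate: {roll_rate(votes):.0%}")  # 40% in this toy example
```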

Roll rates are moderately higher in the House on landmark laws, particularly in more recent congresses. However, even on these major bills, the minority is rolled only about 30 percent of the time. Notable exceptions are the 103rd and 111th congresses, where the minority was rolled on more than 70 percent of landmark laws.

In the Senate, there is more variation in roll rates across congresses, but on average, the minority is rolled on less than 15 percent of all laws. On landmark laws in the Senate, there is only a slight increase in roll rates, with 19 percent of major bills being passed with the majority of the minority voting no.

download 3

Despite increased majority party tools, congressional majorities do not pass a greater portion of their legislative agenda than congresses in less partisan eras. In addition to looking at levels of minority support on legislation and roll rates, Curry and Lee also assess the degree to which majorities are able to enact their legislative agendas. Because of the increased cohesion of parties and tools granted majority leaders, we would expect to find that majorities are more effective in realizing their policy goals. Instead, the authors find that “congressional majorities rarely are able to enact new laws addressing priority agenda items that achieve most of what they set out to achieve. Far more frequently, majorities achieve none of what they set out to achieve or just some of it.”

The figure above displays the percentage of majority party agenda items enacted from 1973 to 2017, categorizing the majority’s success at accomplishing some, most or none of its policy goals on prioritized issues. While there is notable variation in the majority party’s ability to implement its agenda, the most frequent outcome was that the majority realized none of its legislative goals. Only rarely does the majority get most of what it wants on agenda items, particularly in more recent congresses. Even in congresses with unified party control, the majority struggles to get even some of what it’s after. Having congressional majorities—as Senate Majority Leader Mitch McConnell, R-Ky., and House Speaker Paul Ryan, R-Wis., could tell you—does not automatically translate to the majority dictating policy terms to the minority. Instead, it appears the majority must make concessions on policy goals to ensure passage.

In spite of stronger, more cohesive parties and more powerful leaders with tools to execute partisan lawmaking, most laws that pass Congress do so with large percentages of the minority voting in the affirmative. Contrary to consistent claims of majority party dominance over the minority, laws, including landmark bills, are typically passed with majorities of both parties in support.

Here’s the bottom line, in the words of the authors:

After decades of partisan change and institutional evolution in Congress, lawmaking remains a process of bipartisan accommodation.


Image by Lightspring

 

Courts deal another blow to Obama climate legacy

shutterstock_491963644

Attempts by the Environmental Protection Agency to regulate greenhouse gases suffered another setback Tuesday, when a panel of the U.S. Court of Appeals for the D.C. Circuit invalidated an Obama-era EPA rule governing the use of hydrofluorocarbons (HFCs).

HFCs are a greenhouse gas. They’re less well-known than, say, carbon dioxide, but they still have a warming effect when present in the atmosphere, and the rapid rise of HFC emissions in recent years has been a growing cause of concern for policymakers.

Ironically, HFC use has been encouraged by EPA regulation, which authorizes manufacturers to use HFCs as a replacement for other substances that negatively affect the ozone layer. The regulation struck down this week was EPA’s belated attempt to walk back this legacy, telling companies to forget what it said previously, because HFCs are bad now.

The problem is that the statute EPA claimed gave it the authority to restrict HFCs is about restricting ozone-depleting substances. But as everyone (including the EPA) concedes, HFCs don’t deplete ozone. According to the court, since the EPA had already OK’d manufacturers using HFCs as replacements for actual ozone-depleting substances, it couldn’t use the law governing ozone to bootstrap regulation of HFCs.

All this is somewhat technical, but it raises a broader issue. The EPA’s HFC regulation is one example of a larger strategy adopted by the Obama administration and some in the environmental movement to circumvent Congress when it comes to climate change policies. Instead of working out a viable legislative solution that would deal with the problem, the administration looked for ways to commandeer existing statutory and regulatory provisions as a basis for limiting greenhouse gas emissions. Often, this involved stretching the meaning or purpose of particular provisions until they bore little resemblance to how they traditionally were used. The biggest example of this, of course, was the Clean Power Plan.

Now I can almost hear the shouting as I type these words. Obama had no choice! Republicans in Congress were obstructionists, and never would have passed anything. This overlooks that Democrats controlled the House of Representatives and had a filibuster-proof majority during the first years of Obama’s presidency and still couldn’t enact their climate plan, but let’s leave that aside. My point is this: whatever the rationale of trying to act on climate without Congress, recent events have shown that this is a very fragile strategy.

When the EPA stretches its authority to act without congressional sanction, it risks having its work undone by the courts. And even where an EPA action might survive judicial scrutiny, it is vulnerable to being revoked by a future EPA with a different political bent. What can be done without Congress probably can be undone without Congress. This week’s court decision is simply more evidence that any lasting action on climate is going to have to involve Congress.


Image by Evan El-Amin

Diverse voices unite to ask Congress not to gut Section 230

shutterstock_159849869

It’s hard to argue against a bill as unassailably titled as the Stop Enabling Sex Trafficking Act, introduced in the Senate last week as S.1693. The measure already enjoys broad bipartisan support and boasts 27 cosponsors.

However, in its effort to punish online sex traffickers, this legislation appears likely to have unintended, damaging consequences wholly unrelated to that issue. Since its introduction, a large array of voices—including civil liberties groups, think tanks, startups and tech industry groups—have come out, despite obvious reputational risks, to point out ways the bill would be counterproductive and damaging to internet freedom.

The proposed legislation includes overly broad language that would modify Section 230 of the Communications Decency Act, which provides online platforms a limited liability shield for user-generated content. If that shield were weakened as proposed, online platforms would become liable for the behavior of their users. Critics of the legislation agree that without these protections, America’s unique and innovative internet ecosystem would collapse.

As R Street wrote in a bipartisan coalition letter with other think tanks and civil-society organizations, this well-intentioned bill threatens to weaken the pillars of internet freedom. Human rights and civil liberties organizations have voiced concerns that the bill would lead to increased censorship across the web. Moreover, it would hinder existing voluntary incentives to stop trafficking and discourage platforms’ proactive efforts to address evidence of trafficking, for fear of being implicated and prosecuted.

Currently, online communications pass through multiple intermediaries—including web-hosting platforms, email providers, messaging services, search engines, online advertisers and more—all of whom depend on protection from misdirected legal threats. Without the protection of Section 230, each intermediary could face potential lawsuits based on the millions of videos, posts and pictures uploaded to their platforms every day. Many stakeholders have pointed out that it’s unlikely the bill will do anything to combat trafficking, but it will certainly invite trial lawyers to bring a deluge of frivolous lawsuits that target law-abiding platforms.

The Electronic Frontier Foundation has called Section 230 “one of the most important laws protecting free expression online.” To be clear, Section 230 does not provide blanket immunity; it has never prevented intermediaries from facing federal criminal charges. The U.S. Justice Department already has every right to pursue anyone who violates trafficking statutes on internet platforms, without any changes to existing law.

“If online intermediaries were held responsible for the actions of each and every user, the potential liability would be so massive that no reasonable person would start or invest in such a business,” the Consumer Technology Association stated.

A multitude of tech coalitions have also highlighted how the overly broad legislation would harm legitimate U.S. tech companies. Without the protection provided by Section 230, all internet platforms would be forced into self-censorship and resource-intensive review of all user-generated content. While some tech giants might be able to shoulder the cost, the burden undoubtedly would stifle the development of smaller websites and startups. Law-abiding citizens would be left dealing with the repercussions, while bad actors could easily escape by moving abroad or changing their web address.

Section 230 promotes positive legal behavior. The tech industry has been cooperative in the fight against trafficking, working closely with law enforcement to identify potential illegal activities. The Copia Institute and Engine Advocacy highlighted in their letter how the tech industry has created its own tools, combining cutting-edge technology and big data to eradicate trafficking in the online sphere. This bill could have a chilling effect on the industry’s relationship with law enforcement. Trade associations spanning the breadth of the U.S. media and technology industries have described how the measure would be counterproductive to those companies’ efforts to combat sex trafficking. Ultimately, it would create incentives not to filter proactively for evidence that might expose companies to criminal liability.

New legislation is not necessary to hold bad actors accountable for their participation in illegal activity. The internet is the product of user-generated content, and the ramifications of a bill like this would be devastating.


Image by KreativKolors

 

Free to Brew: Alabama’s war on margaritas

Cameron Smith uncovered the attempt by Alabama’s overzealous alcohol control board to ban the sale of pitchers of margaritas to adults. He explains how his team helped pressure the nannies in Alabama to reverse their decision and let consenting adults voluntarily purchase pitchers of margaritas once again. He also talks about how people can replicate that success!

Why should conservatives care about urbanism and city development?

Jonathan Coppage, visiting senior fellow with the R Street Institute, where he researches urbanism and the built environment, joins host Gracy Olmstead on this episode of Federalist Radio. They discuss the ways that design can have impact on our communities and neighborhoods.

“Building a house to engage and to face the street is the first step of reviving a public space,” he said. “Having a public space that orients people towards it is not just part of good community foundation…it’s part of public safety.”

https://thefederalist.com/2017/07/31/conservatives-care-urbanism-city-development/

They discuss Jane Jacobs, Wendell Berry, and others who have written about the spaces in which we live.

What the budget process can tell us about the state of the Senate

shutterstock_496618810

Congress is running out of time to fund the federal government for the upcoming fiscal year that begins Oct. 1.

In July, the House of Representatives passed four appropriations bills bundled together in a so-called minibus. But senators chose to leave town for their August recess rather than take up that spending package.

And there won’t be much time to do so when they return in September. The Senate is currently scheduled to be in session for only 17 days next month. The House and Senate will be on the job at the same time for only 12 of those days.

That doesn’t leave a lot of time for the Senate to take up and debate the House-passed minibus, much less the other eight appropriations bills that have yet to be considered by the full House or Senate. A short-term continuing resolution to keep the government open while Congress finishes its work appears inevitable.

Often overlooked in reporting on this state of play is the fact that Congress has yet to pass a budget resolution for the fiscal year that begins at the end of next month. This is significant, because the budget provides the framework in which the appropriations process unfolds. That is, it governs annual spending decisions in the House and Senate. As such, its consideration is meant to precede that of the appropriations bills.

But that rarely happens these days.

Instead, Congress routinely fails to pass a budget at all. For example, Congress passed only two budgets in the seven years since 2010. And only one of those (in 2016) can be thought of as a budget in any meaningful sense. Congress passed the other one (in 2017) simply to make it possible for Republicans in the House and Senate to repeal and replace Obamacare via reconciliation. Members were focused on the budget’s reconciliation instructions and not its top-line spending, revenue and debt numbers.

A recent paper by Molly Reynolds, a fellow with the Brookings Institution, published through its Center for Effective Public Management, tackles this phenomenon and, in the process, provides valuable insight into why the Senate has been reluctant to take up a budget in recent years.

According to Reynolds, two developments are to blame. First, the budget process has become a partisan exercise. This aligns with how we typically think about the resolution itself: as a symbolic document reflecting the priorities and governing agenda of the majority party. Given the controversial nature of our budgetary politics today, it is hard to imagine a policy area that generates a comparable degree of conflict on such a consistent basis.

As a result, budget votes have become party-line affairs, where senators from one side of the aisle reflexively line up in opposition to those on the other. In this environment, members of the minority party rarely cross over to support the majority’s budget.

One consequence of this is that it is now harder for Senate majorities to pass a budget when they are divided. Achieving party unity is made even more difficult by the strict statutory limits placed on defense and nondefense discretionary spending by the Budget Control Act of 2011.

Given that Senate minorities cannot obstruct budget resolutions, this dynamic also provides insight into how we should expect the institution to operate if a majority uses the nuclear option to eliminate the legislative filibuster in the future. If recent experience with the budget is any guide, empowering a majority to pass measures in the Senate unencumbered by the minority will not necessarily guarantee a sudden burst of legislative productivity.

Reynolds also suggests that the Senate’s reluctance to consider the budget resolution may be driven by the broader breakdown in the institution’s decision-making process more generally. That is, members increasingly offer more floor amendments during the consideration of the budget because it represents one of the few instances when they know they will have the opportunity to do so.

Overall amendment activity in the Senate has declined. While the number of amendments that are filed to legislation considered on the floor has remained relatively consistent, the number of those amendments that are eventually offered (i.e., made pending) to bills has dropped considerably. The reason is that leaders from both parties have utilized a complex assortment of rules and practices to exert greater control over the Senate floor than at any point in the institution’s history. The principal means by which they establish such control is their ability to fill the amendment tree, or offer the maximum allowable number of amendments to legislation. No amendments are in order once all the extant branches on the tree are occupied. As a result, senators are blocked from offering their own amendments.

But it is harder for leaders to block amendments during the budget’s consideration because members can continue offering amendments during the so-called vote-a-rama period once all debate time on the resolution has expired. The budget thus offers members a relatively easy way to engage in credit-claiming and position-taking activities on the Senate floor.

In highlighting these problems, Reynolds underscores the various ways in which the contemporary budget process is in tension with itself. Acknowledging the trade-offs inherent in such contradictions is an important first step in designing reforms that can help reverse Congress’ current trend of not considering a budget.

Several of these reforms are reviewed in the paper, including setting an overall limit on the number of amendments a senator may offer during floor consideration and creating a cloture-like filing deadline for those amendments to give members more time to review them before having to cast their votes.

Another possibility is to revise the contents of a budget resolution to include more information to help rank-and-file members and their staff independently assess the budget. Currently, budget enforcement mechanisms are tied to committee allocations, but few members (and few staff outside of the leadership and budget committees) fully understand how those allocations relate to the functional categories in the budget resolution text. The allocations are not publicly available until they are published in the conference report’s statement of managers at the end of the process. Replacing the budget’s major functional categories in the text, or at least supplementing them, with specific committee allocations for budget authority, outlays, contract authority (where appropriate) and revenues (where appropriate) would enhance senators’ ability to evaluate the impact of any amendments offered, as well as of the underlying resolution itself, on their priorities for the upcoming year.

Reforms like these would certainly make it easier for members to weigh the merits of various amendments and the budget resolution itself. But Reynolds concludes with the astute observation that such changes may be insufficient so long as senators are not able to offer amendments freely to other measures on the Senate floor. That is, the budget resolution and vote-a-rama are likely to remain an outlet for pent-up member demand to participate in the legislative process without changes to how the Senate makes decisions more generally.


Image by nelzajama

 

Juvenile justice reform finally clears its U.S. Senate hurdle

shutterstock_405739084

The following post was co-authored by R Street Research Assistant Megha Bhattacharya.


It’s been 10 years since the expiration of the Juvenile Justice and Delinquency Prevention Act, which created America’s federal standards for the treatment of juvenile offenders. Efforts to reauthorize the legislation have failed repeatedly.

However, the Senate last week passed its version of the JJDPA reauthorization bill—S. 860, the Juvenile Justice and Delinquency Prevention Reauthorization Act—a development that gives hope to juvenile justice reform advocates across the country.

Previous reauthorization attempts faced significant hurdles. Sen. Tom Cotton, R-Ark., held the bill last year over an objection to the phase-out of the valid court order (VCO) exception. VCOs allow state and local systems to detain youth for committing so-called “status offenses” like running away from home, truancy, underage smoking and curfew violations – things that wouldn’t be crimes but for the age of the perpetrator. The VCO exception, Cotton argued, grants state courts additional options when dealing with juvenile offenders.

But Cotton’s opposition prompted a hold from Sen. Rand Paul, R-Ky., who stated he would not support the bill without the phase-out. Ultimately at an impasse, last year’s negotiations ran out of time.

For reauthorization to be successful, both the House and Senate bills must be agreed upon in conference committee and then passed by both chambers of Congress. Leaders on the issue in the House released a statement shortly after news of S. 860’s passage, expressing commitment to crafting a final reauthorization bill alongside their Senate colleagues.

Senate Judiciary Committee Chairman Chuck Grassley, R-Iowa, and Sen. Sheldon Whitehouse, D-R.I., are leading the Senate effort. It is anticipated the bill will reach the president’s desk before the end of this congressional session.


Image by Air Images

Virgin Islands follow Puerto Rico into the debt day of reckoning

shutterstock_268872116

What do Puerto Rico and the U.S. Virgin Islands have in common?  They are both islands in the Caribbean, they are both territories of the United States and they are both broke.

Moreover, they both benefited (or so it seemed in the past) from a credit subsidy unwisely granted by the U.S. Congress: having their municipal bonds be triple-tax-exempt everywhere in the country, something U.S. states and their component municipalities never get. This tax subsidy helped induce investors and savers to overlend imprudently to both territorial governments, to finance their ongoing annual deficits and thus to create the present and future financial pain of both.

Puerto Rico, said a Forbes article from earlier this year—as could be equally said of the Virgin Islands—“could still be merrily chugging along if investors hadn’t lost confidence and finally stopped lending.” Well, of course:  as long as the lenders foolishly keep making you new loans to pay the interest and the principal of the old ones, the day of reckoning does not yet arrive.

In other words, both of these insolvent territories experienced the Financial Law of Lending. This, as an old banker explained to me in the international lending crisis of the 1980s, is that there is no crisis as long as the lenders are merrily lending. The crisis arrives when they stop lending, as they inevitably do when the insolvency becomes glaring. Then everybody says how dumb they are for not having stopped sooner.

Adjusted for population size, the Virgin Islands’ debt burden is of the same scale as that of Puerto Rico. The Virgin Islands, according to Moody’s, has public debt of $2 billion, plus unfunded government pension liabilities of $2.6 billion, for a total of $4.6 billion. The corresponding numbers for Puerto Rico are $74 billion and $48 billion, respectively, for a total of $122 billion.

The population of the Virgin Islands is 106,000, while Puerto Rico’s is 3.4 million, or 32 times bigger. So we multiply the Virgin Islands’ obligations by 32 to see how they compare. This gives us a population-adjusted comparison of $64 billion in public debt, plus unfunded pensions of $83 billion, for a total of $147 billion. The two territories are in the same league of disastrous debt burden.
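For readers who want to check the scaling, here is a minimal sketch of that arithmetic in Python. The variable names are ours; the dollar figures are the Moody’s estimates cited above, and the population ratio is rounded to 32, as in the text:

```python
# Population-adjusted comparison of Virgin Islands and Puerto Rico debt burdens.
# All dollar figures are in billions, from the Moody's estimates cited above.
vi_debt, vi_pensions = 2.0, 2.6
pr_debt, pr_pensions = 74.0, 48.0
vi_pop, pr_pop = 106_000, 3_400_000

scale = round(pr_pop / vi_pop)  # Puerto Rico is roughly 32 times as populous
print(f"population ratio: {scale}x")
print(f"VI public debt, scaled: ${vi_debt * scale:.0f} billion")    # ~$64 billion
print(f"VI pensions, scaled: ${vi_pensions * scale:.0f} billion")   # ~$83 billion
print(f"VI total, scaled: ${(vi_debt + vi_pensions) * scale:.0f} billion "
      f"vs. Puerto Rico's actual ${pr_debt + pr_pensions:.0f} billion")  # ~$147B vs. $122B
```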

What comes next?  The Virgin Islands will follow along Puerto Rico’s path of insolvency, financial crisis, ultimate reorganization of debt, required government budgetary reform and hoped-for economic improvements.

A final similarity: The Virgin Islands’ economy, like that of Puerto Rico, is locked into a currency union with the United States from which, in my opinion, it should be allowed to escape. This would add external adjustment to the imperative internal adjustment as the debt day of reckoning arrives.


Image by Peter Hermes Furian

 

Free-marketers, environmentalists both have reasons to hate the RFS

shutterstock_506450029

The Renewable Fuel Standard, created more than a decade ago, remains the source of strong divisions today. But as an Aug. 1 hearing of the Environmental Protection Agency showed, it also can be the source of rare bipartisan agreement, with experts from across the political spectrum testifying to the need to update and reform the RFS.

Under terms of the Energy Policy Act of 2005, the RFS “requires a certain volume of renewable fuel to replace or reduce the quantity of the petroleum-based transportation fuel, heating fuel, or jet fuel.” Two years later, the Energy Independence and Security Act of 2007 updated the RFS and set a projection for the volume of renewable fuels, particularly ethanol, that are mandated to be mixed into the nation’s fuel supply.

Under current projections, by 2022, 15 billion gallons of corn-based ethanol and 2.1 billion gallons of non-corn biofuel will be required in the nation’s fuel supply. While these numbers simply continue existing statutory requirements, both environmental and free-market groups have noted that the updated volumes will have harmful effects on the fuel market and on car engines, and will contribute to pollution from farm runoff.

Before the RFS was passed, oil companies already had been producing gasoline with a 10 percent blend of ethanol—what’s commonly called E10—as corn-based ethanol is generally cheaper than its counterparts derived from petroleum. However, the RFS mandates do not stop at E10. In the effort to “create a market” for advanced fuels, the RFS now calls for blending more ethanol into gasoline than consumers are willing to buy.

Most vehicles on the road can use E10 because it is the highest ethanol blend that does not void vehicle warranties. Many car engines are not warrantied for higher ethanol blends, and running one can cause severe damage and corrosion.

“We were pleased to see that the Environmental Protection Agency acknowledged ‘real constraints’ in the market, in terms of demand, infrastructure and production, toward accommodating higher blends of ethanol,” the National Taxpayers Union’s Nan Smith testified before the EPA. “If admitting you have a problem is the first step toward recovery, this and the slightly lower [renewable volume obligations] recommended in the 2018 proposal are good signs for taxpayers.”

Unfortunately, the RFS itself takes no account of the consequences faced by consumers. Due to the strict requirements built into the law, the EPA is unable to adjust the volume requirements downward in the face of lower-than-expected demand. This rigidity leaves companies scrambling for ways to comply, rather than dedicating their energies toward real, market-driven innovation.

These market distortions alone would be reason enough to oppose the RFS, but regrettably, it turns out the mandate is also damaging to the environment, particularly through the nitrogen-rich fertilizers applied to grow corn. The RFS’s demand for more and more corn-based ethanol creates larger demand for corn and more pollution from its production. The runoff from large farms in the Midwest and Great Plains makes its way into the Mississippi River and has created a large dead zone in the Gulf of Mexico.

While environmental groups overall are split on the effectiveness of the RFS mandates, Friends of the Earth has opposed the standards because of the pollution they cause. “As it ignores the significant environmental damage created by runoff from biofuels production, the RFS will likely exacerbate the problem,” the group notes.

The RFS safeguards do require that biofuels meet a greenhouse gas emissions-reduction standard for each biofuel type. Ethanol made from corn must reduce greenhouse gas emissions by 20 percent; advanced biofuels must reduce greenhouse gas emissions by 50 percent; and cellulosic biofuel must reduce greenhouse gas emissions by 60 percent. These are good standards to have, but they have loopholes that cause the effort to fall short of its desired effect. As it stands, 15 billion gallons of corn ethanol are exempt from the safeguards. FOE adds that the EPA uses flawed data on the true impact of biofuels:

For example, the EPA uses a questionable analysis to predict that corn ethanol will produce less pollution than regular gasoline one day in the future, and then uses that analysis to excuse the use of extremely dirty corn ethanol today.

Rather than hold RFS volumes steady, the EPA should work with Congress to correct what is a fundamentally flawed statute, with the goal of creating an environment where market innovation is encouraged, rather than creating fake markets for industries with powerful lobbyists. As R Street’s Lori Sanders testified at the EPA’s recent RFS hearing:

Rather than continue down this failed path, we at R Street encourage the EPA to work with Congress to pass reforms that work. The federal government does have a role to play in creating an environment in which new fuels and technologies can take root in the marketplace and, in the process, reduce emissions and preserve the environment for generations of citizens to come. Sadly, the RFS does not fit the bill, and the new EPA should seek better solutions.


Image by Jonathan Weiss

 

U.S. steel requirements for pipelines undermine American energy, trade

shutterstock_390174634

The United States is a free-trading nation, regardless of what President Donald Trump says on any given day. Any doubters about current U.S. trade policy should look no further than an Aug. 1 op-ed in The Wall Street Journal written by U.S. Commerce Secretary Wilbur Ross entitled “Free Trade is a Two-Way Street.”

The article and its accompanying graph clearly show how much lower U.S. tariffs on nearly all imported products are than the corresponding tariffs imposed by the European Union and China, with China being the bigger protectionist. The Trump administration is preparing to launch a major attack on China’s trade barriers, but the trade-barrier proposals the president has made at home are deeply inconsistent with free trade, in ways that undermine U.S. jobs and energy security.

In particular, the Commerce Department is expected to submit a proposal that would require domestic steel be used in all domestic pipelines, a proposal that could dramatically upend the ability of pipeline operators to source materials at a time of booming demand.

The United States has been in a pipeline boom this past decade thanks to the shale gas and tight oil booms, with roughly 20,000 miles of oil pipeline added since 2010 and more than 10,000 miles of natural gas pipeline added each year since 2008, according to the U.S. Transportation Department.

But few U.S. firms make the type of steel pipe used in large-diameter pipelines, and 77 percent of the steel used in line pipe comes, one way or another, from foreign sources: particularly China, Japan, Turkey and South Korea.

According to ICF International, requiring domestic steel could add dramatically to pipeline costs, both in money and time, since disrupting the current international supply chain would cause shortages and possibly curtail future pipeline investments.

Depending only on U.S.-produced pipe “could lead to long construction delays and higher costs, potentially canceling planned pipeline projects or blocking new projects,” wrote a group of oil and gas associations to the Chamber of Commerce back in April. Pipeline operators cannot simply substitute other materials or products when constructing and repairing pipelines, ICF wrote.

Such restrictions on trade fly in the face of everything the U.S. energy space has learned since the marriage of hydraulic fracturing and horizontal drilling caused oil and gas development to explode forward around 2008.

Since that time, $1 trillion in capital—much of it foreign investment—has been raised and spent to boost the drilling and transportation of oil and gas from shale fields around the country. As we speak, five separate pipelines—the Atlantic Sunrise Pipeline, the Nexus Pipeline, the Dakota Access Oil Pipeline, the Rover Pipeline and the Mariner East II—are either complete or within months of completion, moving tens of millions of dollars of oil and gas to market every day using steel sourced from around the world.

Trump’s attention (some say fixation) on the United States’ structural trade deficit, and his proposals to address it, no doubt were among the reasons for his election. But it makes no sense to place trade restrictions on the energy supply chain when the products being produced, oil and gas, have much higher value and can have a dramatically greater impact on the country’s long-term health than demand for domestic steel pipe.


Image by fuyu liu

 

Congressional Pit Stop: How legislative dysfunction deters young talent

paul-ryan-i-1024

Young people yearn to enact change and make their mark upon the world. Many of them, however, no longer see government as a viable arena in which to do so, in no small part due to congressional dysfunction.

Nurtured in a country constantly at war for most of my life, and thrust into maturity during the worst financial crisis in decades, my generation has acquired a well-developed sense of political skepticism. Large swaths of young Americans no longer possess faith in political institutions and processes, and view the government as powerless to combat injustice or solve problems.

Yet without fail, throughout the school year, the University of Chicago Institute of Politics invites myriad political speakers to campus. From members of Congress to idealist activists, their message remains unanimous: There is an unmet need for a new generation of public servants.

Each summer, D.C. is inundated with an influx of young student interns and staffers looking to make a difference. And while Congress remains a powerful attraction, more people are pursuing options beyond the Hill, turning down competitive government internships in favor of more fulfilling private-sector opportunities. As someone who’s made this exact decision, I am a part of the problem. The decision should not come as a surprise when many congressional internships have become dreary positions filled with administrative work and little connection to professional development.

And while interning itself is a temporary commitment, young people’s disinterest in long-term government work is indicative of a larger problem among congressional staffers. Amid high disapproval ratings, political gridlock and hyperpartisanship, the frustration within government is palpable, particularly among individuals my age. The decline of faith in political institutions, combined with a growth of opportunities to enact societal change outside of government, has led millennials to choose private-sector missions in growing numbers.

Though Congress will have little trouble filling many of its staffing positions, a serious underlying issue remains: are positions being filled by the most qualified candidates? Feelings of pessimism make it hard to attract young people to serve Congress, and even harder to retain them. As a result, it is difficult to generate institutional growth if each new wave of public servants views its time in our national legislature as a steppingstone to other opportunities with more meaning.

Congress is supposed to be the foundation upon which the rest of the government edifice rests. It is the first branch, and was designed to be the driving force of policymaking, the repository of national powers and the channel of popular energy. Article I assigned Congress diverse and immense powers to govern so as to properly reflect property, people and political communities. Congress was once the bedrock institution but has fallen victim to its vices.

Established to make policy and respond to shifting social and economic needs, our national legislature is gridlocked by ideological strife. Because of this, Congress does not offer younger candidates an environment conducive to sustainable or meaningful growth. But more than that, the inability to govern signals a lack of congressional demand for the ready supply of ideas and talent – talent that therefore flows to workplaces off Capitol Hill.

While recent attention has been focused on President Donald Trump’s inability to fill high-level government positions, the bigger story is that decades of disinvestment in Congress have left rampant staffing problems within its daily structure. Legislative branch staffing has not grown proportionally with the expanding size of the government or the U.S. population, which has weakened the most democratic branch of government.

Experienced staff is a rarity. By the time congressional staffers gain high-level expertise, they’ve typically begun the process of cycling out of the institution to pursue other prospects. The continuous influx of bright and energetic staff is not an ideal replacement for staffers with policy experience. Disinvesting in the legislative branch talent pool has led to a dependence on external resources—mainly interest groups—which have smarts but inevitably have an agenda. The decay of institutional knowledge is hampering effective governance.

Congressional reform should focus on battling these external pressures and strengthening crumbling institutional structures by investing in motivated staff, with a focus on retention. While social and political issues continue growing in complexity, Congress remains unable to address them properly. The government is responsible for processing more information than ever before, and is doing so with even fewer resources. Why should Congress continue to rely on private research, elite op-eds and corporate lobbyists when it can strengthen itself from within?

Young professionals are demoralized by the behavior of Washington officials, but their disengagement is rooted in frustration, not apathy. It is misinformed to fault millennials for remaining unengaged with the Hill when the government itself has repeatedly and publicly divested from young talent. Without a clear solution, however, congressional dysfunction is condemned to spiral further. Instead, Congress should invest in creating long-term career paths and educational opportunities that train staffers continuously. This is what congressional internships should be about.

A job on the Hill should be more than a pit stop. But it won’t be anything but that until Congress reforms itself.

Alex Pollock on the Peak Prosperity podcast

Appearing on the Peak Prosperity podcast, R Street Distinguished Senior Fellow Alex Pollock details his assessment of the Federal Reserve’s major transgressions against the interests of the general public. But perhaps more interestingly, he shares his observations from a recent hearing of the House Financial Services Committee on the same topic (at which he testified), and how it struck him that many of the members of Congress who convened it appear to be growing increasingly concerned about the Fed’s lack of accountability, as well as its potential fallibility.

For Harry Potter’s birthday, try on the federal affairs Sorting Hat

snek

Today is Harry Potter’s 37th birthday. In honor of The Boy Who Lived and savior of the wizarding world, we had some administration officials and members of Congress try on the Sorting Hat to determine which house of Hogwarts is their true home.

As a proud Slytherin, I’d like to remind everyone that this is all in good fun, and each of the four houses has its merits. (Even Hufflepuff; J.K. Rowling herself would have been one.)

Do you agree with our sorting? Who did we miss? Let us know in the comments or tweet to us at @RSI! And always remember:

snek

Gryffindor

Sen. Mike Lee, R-Utah

Energy Secretary Rick Perry

Rep. Darrell Issa, R-Calif.

Sen. John McCain, R-Ariz.

 

Hufflepuff

Education Secretary Betsy DeVos

Rep. Blake Farenthold, R-Texas

Housing and Urban Development Secretary Ben Carson

 

Slytherin

Sen. Al Franken, D-Minn.

Sen. Tom Cotton, R-Ark.

Rep. Justin Amash, R-Mich.

Sen. Ted Cruz, R-Texas

 

Ravenclaw

Rep. Jared Polis, D-Colo.

Rep. Bob Goodlatte, R-Va.

Sen. Ron Wyden, D-Ore.

Transportation Secretary Elaine Chao

Sen. Ben Sasse, R-Neb.

 

Whitehouse-Schatz carbon tax moves in right direction, but falls far short

shutterstock_73723585

Sens. Sheldon Whitehouse, D-R.I., and Brian Schatz, D-Hawaii, are serious about tackling the challenge of climate change, and they’re out this year with another carbon proposal intended as an “olive branch” to the right. As Whitehouse describes it:

Virtually every person on the Republican side who has thought the climate change problem through to a solution has come to the same place: price carbon emissions to encourage cleaner energy and return the revenue to the American people.

That’s just what their new legislation intends to do. From 10,000 feet, it’s a promising start. The proposal imposes a tax on carbon emissions from fossil-fuel combustion and other major emitters; establishes a border adjustment to address concerns about competitiveness; and returns all the revenue, keeping none for the federal coffers.

The devil, however, is in the details. And that’s where the American Opportunity Climate Fee Act falls short.

First, there’s the revenue. We know from the literature that a revenue-neutral carbon price can boost economic growth if revenues are devoted to cutting taxes on capital. Other ways of recycling the revenue—cutting payroll taxes, offering lump-sum rebates or reducing sales taxes—all pull the reins in on the economy. The Whitehouse-Schatz proposal spends the revenue several ways: it reduces the top corporate income tax rate to 29 percent; offers a refundable tax credit to working Americans; offers additional payments to Social Security and veterans’ benefits recipients; and delivers $10 billion in annual block grants to the states.

The cuts to the corporate income tax rate are a good start, but insufficient. Any redesign of the corporate income tax should make the United States a more competitive place to do business; the Whitehouse-Schatz proposal would leave the United States with a tax rate that’s still 50 percent higher than the European average. That’s not exactly the ground-breaking shift we’re looking for.

Refundable tax credits to workers and additional payments to Social Security and veterans’ benefits recipients are intended to address the regressivity of a new tax on carbon. That’s a worthy goal; reducing greenhouse gas emissions shouldn’t increase the burden on those least able to pay. But the senators’ proposed structure creates a national constituency for something akin to a new entitlement. That constituency will support a tax just high enough to maintain annual payments and just low enough to not actually phase down the greenhouse gas emissions that support the new annual payment.

Lastly, the $10 billion in annual block grants, distributed on a per-capita basis, is intended to fund individual states’ efforts to help those who can least afford to pay the new taxes on energy, or those whose industries are hardest hit. Per-capita distribution creates a serious issue for the most rural states with the lowest populations – Alaska, the Dakotas, Montana and Wyoming. These states also would be disproportionately impacted; energy development is among the top five industries in Alaska, North Dakota and Wyoming.

Then there’s the matter of the tax itself. Whitehouse-Schatz would start at $49/ton of carbon dioxide in 2018, rising 2 percent above inflation year-over-year until an emissions target is attained. That’s a pretty high starting value: when the Congressional Budget Office modeled the Waxman-Markey cap-and-trade proposal in 2009, it estimated first-year prices of around $15/ton.
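For a sense of how quickly such a fee grows, here is a minimal sketch of the schedule as described above: a $49/ton starting price in 2018 rising 2 percent above inflation each year. The values are in constant 2018 dollars, and the 10-year horizon is ours for illustration; the bill itself runs until an emissions target is attained.

```python
# Carbon fee path in constant (inflation-adjusted) 2018 dollars.
start_price = 49.0   # $/ton of carbon dioxide in 2018
real_growth = 0.02   # rises 2 percent above inflation, year over year

for year in range(2018, 2029):
    price = start_price * (1 + real_growth) ** (year - 2018)
    print(f"{year}: ${price:.2f}/ton")
# After a decade the real price is only about $59.73/ton, so the bite of the
# tax comes mostly from the high starting value rather than the escalator.
```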

More troubling, however, is how the tax is applied. The good news: it’s designed to be administratively simple, capturing emissions at as few collection points as possible and as accurately as possible. The bad news: in capitulating to environmentalists’ demands, it actually discourages industry best practices and safe, clean infrastructure. Whitehouse-Schatz requires that the tax be applied to “greenhouse gases that escape throughout the fossil fuel supply chains.” But the tax would not be applied at the points of emission; rather, an adjustment would be applied equally to all producers and importers of fuel. Companies that utilize the best practices and the most advanced infrastructure with the fewest leaks would pay just as stiff a penalty as companies that, under this design, wisely avoid investing in equipment from which they would see no benefit.

Finally, the Whitehouse-Schatz proposal doesn’t include any mechanism for regulatory preemption. The Environmental Protection Agency is obligated to regulate greenhouse gas emissions under the Clean Air Act, a mandate that created the faulty, expensive and ineffective Clean Power Plan. No tool within the CAA creates a proper framework for a regulatory solution. Even the Waxman-Markey cap-and-trade bill included provisions that would prevent the EPA from regulating carbon under certain provisions in the Clean Air Act. The senators, however, see this regulatory burden as a bargaining chip, not a problem to remedy.

For all its faults, the Whitehouse-Schatz proposal is promising in one respect: it demonstrates that motivated environmentalists know that market-based instruments can address the climate challenge effectively. An appropriately designed revenue-neutral carbon price can encourage economic growth, draw investment, boost innovation and achieve more emissions reductions at a lower cost than the regulatory machine. Toward that end, R Street has proposed a carbon tax that would finance the outright elimination of the corporate income tax, a proposal we believe would unleash capital markets and boost employment while untethering economic growth from a carbon-based fuel supply.

Sen. Whitehouse is right – conservative solutions can work. The American Opportunity Climate Fee Act, however, is a far cry from conservative.


Image by visualdestination

The Hillsborough PTC is dead; long live the Hillsborough PTC

shutterstock_655488313

After years of tormenting ridesharing companies Uber and Lyft, as well as their customers, with burdensome regulations designed to prop up area taxi cab companies, the Hillsborough County Public Transportation Commission is set to be dissolved later this year by an act of the Legislature. Founded in 1987, the Hillsborough PTC regulates ground transportation companies such as cabs and limousines, as well as overseeing tow-truck companies in the Tampa Bay area.

Now that the Hillsborough PTC’s days are numbered, some of its remaining proponents warn that consumers will lack the kinds of protections that apparently only the PTC can provide. A recent local news report homed in on the PTC’s oversight of tow-truck companies as an example.

Indeed, tow-truck company activities can and should be regulated by local and state authorities. However, it does not take an entire government agency to do just that. In fact, the PTC was the only such local transportation board in the entire state of Florida. Other counties delegate ground transportation, towing and other such oversight and regulation to police departments, consumer-protection bureaus and other departmental offices within county government.

In Miami-Dade County, for example, tow-truck companies are regulated by the Department of Regulatory and Economic Resources, which also enforces consumer-protection measures like maximum towing rates, background checks on tow-truck operators, vehicle-safety standards, insurance requirements and other protections and remedies established by the Miami-Dade County Commission for consumers who have been towed. Orange County, which includes Orlando, has a consumer fraud unit that deals with all sorts of consumer-related issues, ranging from house repairs and construction to towing grievances.

Many municipalities also enact their own regulations that either work in harmony with the county’s or add additional layers to them. Florida state law also establishes basic guidelines. While towing is an industry inherently prone to angry customers, Florida’s is a relatively stable market.

The Hillsborough County Commission is currently exploring ways to distribute the PTC’s regulatory responsibilities across existing county agencies. Tow-truck oversight, for example, is likely to be transferred to the Sheriff’s Office. The commission is set to consider this and other staff recommendations related to the PTC’s impending dissolution at its next meeting Aug. 16.

Residents should praise the Legislature for dissolving an obsolete, unnecessary government agency that had been undermining competition and restricting transportation choice. However, county residents should remain vigilant of commission proceedings to ensure it preserves the rules and regulations the PTC enacted once upon a time that were reasonable and worked. This exercise should not be used as an opportunity by local politicians, bureaucrats and entrenched interests to foist the kinds of unnecessary, burdensome regulations that led to the PTC’s dissolution in the first place.


Image by CrispyPork

 

Great ECPA expectations

shutterstock_387175249

When the Electronic Communications Privacy Act first was passed back in 1986, lawmakers mostly didn’t even imagine that email might play a central role in American life. Scarcely anyone in 1986—whether inside or outside of Congress—foresaw a day when we’d use the internet to help us find our misplaced phones and watches.

The digital landscape for Americans has vastly changed over the last three decades, but the central law spelling out when government needs to get a warrant to capture electronic communications has not. Because the internet is central to most of our lives, and because the potential scope of government intrusion on our lives has thus become vastly greater, it’s high time (or, really, past time) for Congress to update ECPA. That’s why we are pleased to see today’s introduction of the ECPA Modernization Act of 2017 by Sens. Mike Lee, R-Utah, and Patrick Leahy, D-Vt.

Congress is now poised to update the law in ways that reflect how pervasively we use digital communications and tools (computers, phones, watches, fitness trackers, and many other devices) in our everyday lives. The act aims to fix some serious flaws in the older law. The ECPA Modernization Act is not just about the content of digital communications; it’s also about the geolocation features (and other non-email, non-messaging features) that internet services increasingly offer us.

That’s not to say that the ECPA Modernization Act is perfect. It is a fundamental principle of liberal democracy that there should be limits on what government can grab from your digital world. These limits are essential to understanding the Fourth Amendment in the 21st century. Even as we see progress toward updating digital-privacy laws, it’s essential to point out that plenty of issues, such as the gathering and analysis of metadata, still need to be revisited and more thoroughly reviewed from a pro-privacy standpoint. (I’ve written about the underlying problems with ECPA’s inadequate protections for metadata here.)

And as Chris Calabrese of the Center for Democracy and Technology testified in 2015, the last time the Senate considered updating ECPA, the consequence of failing to update this creaky 1980s statute has been ambiguity and inconsistency. Is a Google Doc subject to the law if you’re only using Google Docs to store a document for later editing? Or, if it isn’t, does it become subject to ECPA provisions when you share the document for others to edit? Inquiring minds wanted to know.

This latest ECPA-revision language takes steps toward addressing both my concerns about metadata and Calabrese’s concerns about ambiguity. It adds warrant requirements for information stored in the cloud and for location information, as well as adding new limits on metadata collection. The ECPA Modernization Act may not be perfect (and what legislation is, really?), but it’s a good start, and it ought to serve as a good reminder that we shouldn’t wait another three decades—or even another three years—before we take another comprehensive look at how our individual privacy, and Fourth-Amendment-based limits on government snooping on citizens, should be updated for our fast-evolving digital landscape.


Image by Maksim Kabakou

 

Why city officials should welcome the autonomous revolution

shutterstock_649829878

The following post was co-authored by R Street Tech Policy Associate Caleb Watney. 


With tech and car companies racing to advance the state of self-driving car technology, the House Energy and Commerce Committee just gave the burgeoning industry a measure of regulatory certainty. Earlier today, the committee marked up and unanimously passed H.R. 3388, the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act, a draft version of which previously moved through the panel’s Digital Commerce and Consumer Protection Subcommittee.

The bill would reserve for states and localities the power to regulate their streets and the rules of the road, as is appropriate. But when it comes to regulating vehicle design, performance and safety standards, the federal government would continue to take the lead through the National Highway Traffic Safety Administration.

Even though that basic division of regulatory labor has been a successful model for 60 years, groups representing city and transportation departments, along with allied activists, are sounding an alarmist warning that the House bill would “preempt state and local governments from regulating their own streets.” A joint letter from the National Association of City Transportation Officials, National League of Cities, Transportation for America and the Natural Resources Defense Council proclaims:

The bill would allow autonomous vehicle companies to self-certify the safety of their vehicles without an independent reviewer, and would severely limit any government from protecting the well-being of its citizens. This is akin to trusting the fox to protect the hen house, and would clear the way for automakers and tech companies to deploy hundreds of thousands of automated vehicles without adhering to stringent safety standards.

In fact, traditionally operated vehicles aren’t subject to pre-market approval either, because that would be a slow and costly system without any concrete benefit. What’s more, the safety standards already in place for traditionally operated vehicles also would apply to autonomous vehicles under the committee’s bill, just as they do now. Manufacturers of autonomous vehicles must go through a lengthy regulatory process to receive exemption from any NHTSA safety standard and must justify each deviation by demonstrating that an exempted development provides an equivalent level of safety. Ultimately, if manufacturers fail to live up to the agreement they make during the exemption process, or if vehicles prove to be problematic in practice, NHTSA still would have full authority to take them off the road using its expansive recall authority.

The legislation thus leaves the federal government well-positioned to continue protecting the well-being of all Americans with regard to vehicle safety—autonomous or otherwise—just as it has been doing with human-piloted vehicles for decades. Because the bill raises the cap on exemptions, companies will be able to conduct much more rigorous testing and deploy autonomous technologies more quickly. And because it avoids a patchwork of design, performance and safety standards promulgated by local governments, companies will not be driven to “shop” for friendlier regulatory environments across state lines or be forced into the compliance nightmare presented by the development of 50 or more conflicting standards.

NACTO and allies rightly point out that local governments “have made great strides to manage traffic congestion, reduce emissions and air pollution, and improve safety and mobility for people accessing jobs and opportunities.” After decades when American street design and transportation planning lagged behind international standards, many localities are catching up by implementing effective road diets, narrowing lanes and making multi-modal accommodations. But this legislation does nothing to interfere with that fine work. In fact, it relieves city and transportation planners of responsibilities that are beyond both their budgets and their core competencies.

Nothing in the House legislation prevents state and local governments from continuing to enhance the safety of their streets through improved design and regulation. Autonomous vehicles, just like human-piloted vehicles, will be responsible for following “rules of the road,” including speed limits and rights of way. And in fact, testing thus far shows that autonomous vehicles promise to be far more compliant with road regulations than citizen drivers and to provide dramatically better safety outcomes.

With more than 40,000 auto fatalities in 2016, 94 percent of which were due to human error, every day that autonomous vehicles aren’t on the road means lives are lost. No one knows the safety dangers posed by human-operated automobiles better than the transportation officials that NACTO represents. Those officials should welcome the addition of highly autonomous vehicles to the toolkit of advocates for street safety.


Image by Scharfsinn

 

How Congress can use evidence-based policymaking

The Legislative Branch Capacity Working Group examined the use of data and analyses in policymaking at the group’s July 17 meeting, including the challenges Congress faces in attempting to implement evidence-based policymaking and how increasing congressional capacity could lead to more and better evidence-based lawmaking.

Collectively, panelists Lucas Hitt of the Commission on Evidence-Based Policymaking, Andrew Reamer of George Washington University, Timothy Shaw of the Bipartisan Policy Center and R Street Vice President of Policy Kevin R. Kosar noted that Congress always has sought data and evidence to help it make policy, but legislators will disregard that evidence for at least a few reasons: values, distrust, and parochial and other pluralistic interests.

The Commission on Evidence-Based Policymaking is set to release its report this fall, which will advise Congress on how to increase the use of data and research in legislating and oversight.

Video of the panel is embedded below:

 

Moss on whether copyright is a property right

With Congress possibly set to consider new ideas on copyright, R Street Tech Policy Manager Sasha Moss participated in a recent panel convened by America’s Future Foundation to debate the constitutional and philosophical underpinnings of intellectual property and explore whether today’s copyright laws are excessive or insufficiently protective. Alongside co-panelist Kristian Stout of the International Center for Law and Economics and moderator Jim Harper of the Competitive Enterprise Institute, Sasha observed that current U.S. copyright law is not in line with what the founders intended. Full video is embedded below:

Is the real estate double bubble back?

shutterstock_595098272

Average U.S. commercial real estate prices are now far over their 2007 bubble peak, about 22 percent higher than they were in the excesses of a decade ago, just before their last big crash. In inflation-adjusted terms, they are also well over their bubble peak, by about 6 percent.

In the wake of the bubble, the Federal Reserve set out to create renewed asset-price inflation. It certainly succeeded with commercial real estate – a sector often at the center of financial booms and busts.

Commercial real estate prices dropped like a rock after 2007, far more than did house prices, falling on average 40 percent to their trough in 2010. Since then, the asset price inflation has been dramatic: up more than 100 percent from the bottom. In inflation-adjusted terms, they are up 83 percent.
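Those figures hang together arithmetically, as a minimal sketch shows (the index numbers are our own construction, with the 2007 peak set to 100):

```python
# Round trip of average U.S. commercial real estate prices, peak 2007 = 100.
peak = 100.0
trough = peak * (1 - 0.40)  # fell about 40 percent to the 2010 trough -> 60.0
now = peak * 1.22           # now about 22 percent above the old peak -> 122.0

rebound = now / trough - 1.0
print(f"rebound from the 2010 trough: {rebound:.0%}")  # ~103%, i.e. "more than 100 percent"
```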

This remarkable price history is shown in Graph 1.

graf 1

Bank credit to commercial real estate has been notably expanding. It is up $238 billion, or 21 percent, since the end of 2013 to $1.35 trillion. It has grown in the last two years at more than 7 percent a year, which is twice the growth rate of nominal gross domestic product, although not up to the annual loan growth rate of more than 9 percent in the bubble years of 2000-2007.
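The credit figures are likewise internally consistent, as a quick sketch confirms (dollar amounts in trillions, taken from the paragraph above):

```python
# Bank credit to commercial real estate.
now = 1.35             # $ trillions outstanding today
increase = 0.238       # growth since the end of 2013
base = now - increase  # about $1.11 trillion at the end of 2013

print(f"growth since end of 2013: {increase / base:.0%}")       # ~21%, as cited
print(f"two years at 7%/yr compounds to: {1.07 ** 2 - 1:.1%}")  # ~14.5%
```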

The Federal Reserve also succeeded in promoting asset-price inflation in houses. U.S. average house prices are also back over their bubble peak—by about 2 percent, in this case. They have rebounded 41 percent from their 2012 trough. In inflation-adjusted terms, house prices have climbed back to the level of 2004, when we were about two-thirds of the way into the bubble. See Graph 2.

graf 2

The rapid house price increases since 2012 have not been matched by growth in bank residential mortgage loans or aggregate mortgage credit. Banks’ total residential mortgage loans were $2.45 trillion in 2012 and $2.41 trillion in the first quarter of 2017. Total U.S. 1-4 family mortgages outstanding went from $10.04 trillion to $10.33 trillion in the same period. Thus, there is a marked difference between the two real estate markets, with commercial real estate having even more price inflation and more bank credit expansion than houses. The interest rate environment is, of course, the same for both.

House prices and commercial real estate prices are closely related. As shown in Graph 3, they made an obvious double bubble, a double collapse and a double big rebound. The statistical correlation between the two since 2001 is 86 percent.

graf 3

Is what we have now a new double bubble, or something else?  Considering where these charts may go from here, we may ponder three key questions:

  1. If interest rates go up 1 percent or 2 percent, what will happen to commercial real estate and house prices?
  2. If the Fed stopped being a big buyer of mortgage-backed securities and bonds, what would happen to interest rates?
  3. Having driven asset prices up, by buying and maintaining huge long positions, can the Fed get out of these positions without driving prices down?

We will know the answers when, sometime in the future, somebody explains it all to us ex post. For now, we know that real estate prices are back to the levels of the last bubble, reflecting the Federal Reserve’s production of asset-price inflation through its interest rate and bond market manipulations.


Image by Noah Wegryn

 

New DOJ asset-forfeiture rules trample basic rights

handcuffs-and-money-1462610092f8q

In a speech Monday to the National District Attorneys Association annual conference, Attorney General Jeff Sessions announced the U.S. Justice Department plans to ramp up the use of civil asset forfeiture to “combat crime.”

If this sounds like a cliché ripped from a 1980s political speech, that’s not far off. The truth is, the DOJ’s new effort has less to do with fighting crime than it does with funding for law enforcement.

Sadly, what Sessions actually is doing is green-lighting an escalation of DOJ and local law-enforcement efforts to seize property from people who have never been convicted of a crime, thus allowing government agencies to reap major monetary rewards. To put it another way: if the government can’t convict you of a crime, it will just take your stuff instead.

One could argue the road to asset forfeiture was paved with good intentions. The practice re-emerged at the height of the 1980s drug war, when law-enforcement agencies across the country were trying to bring down the drug trade. Civil asset forfeiture programs gave government agencies the power to seize cash, cars, guns or anything else of value that was potentially bought with drug money. Suspected drug dealers would then be forced to prove in civil court that they obtained everything legally. Once seized, the cash and other items would be used to fund both federal and local agencies’ drug war efforts, creating something of a vicious circle.

Like any power the government is granted, the practice has been expanded massively, with the end result being blatant violations of Americans’ civil rights. This country was founded on the principles of property rights and protection from unreasonable government search and seizure. Indeed, we have drifted a long way from the principle, enshrined in our founding documents, that all people are protected by due process of law.

Unsurprisingly, asset forfeiture has become a cash cow for the federal government and a slush fund for local law-enforcement agencies across the country. Local agencies construct their budgets based on expected seizures, which has created incentives to seize assets just to keep the lights on. All in all, civil asset forfeiture is a $5 billion “industry.” The government has so perfected the art of seizure that it now outperforms actual criminals. In 2014 alone, the government seized more assets than burglars stole.

For a while, things had been looking up. During the Obama administration, the Justice Department took some real steps toward curbing civil asset forfeiture. More importantly, many states across the country started to take a stand by passing laws to make it tougher for the government to seize assets. As of today, according to the Institute for Justice, 13 states require a criminal conviction before the government can take someone’s property. However, these state-level reforms are about to become moot thanks to the Justice Department.

Along with the increased interest in asset forfeiture, Sessions announced Wednesday that the DOJ will also reinstate “adoptive” forfeiture, which gives state and local agencies a workaround to state laws by allowing them to seize property under a federal statute. Not only is this a direct challenge to states’ rights, it also provides incentives for local agencies to continue to pursue these actions with little regard for civil liberties.

Few think criminals should profit from their crimes. There’s also no doubt that it is challenging for state and federal law enforcement agencies to investigate and prosecute complex criminal enterprises like drug cartels and human traffickers. But the current system violates some of the basic principles this nation was built upon—due process of law, innocent until proven guilty and freedom—all in the pursuit of innocent people’s property.


Image by hafakot

 

Using the CPP to boost coal is just as bad

shutterstock_663418213

President Donald Trump has spoken repeatedly of his support for coal mining, pledging publicly that “we will put our miners back to work.”

It probably should not be surprising, then, that the White House would give serious consideration to a pitch made by several coal-mining union representatives to the Office of Management and Budget that would see the Environmental Protection Agency rewrite the Obama administration’s Clean Power Plan in ways that help the coal industry.

Alas, the ends the industry wants to achieve using the CPP are at least as wrongheaded as the command-and-control model that was used to craft the emissions plan in the first place.

What the proposal by the AFL-CIO, the International Brotherhood of Electrical Workers and the Utility Workers Union of America recommends is for EPA Administrator Scott Pruitt to initiate only the first of the CPP’s four “building blocks.” Such a plan would reward coal-fired power plants if they improved their boiler heat-rate efficiency, even though the improvements could cut greenhouse gas emissions by only 2-3 percent, as opposed to the additional 10-12 percent the previous administration wanted to see.

The CPP’s other three building blocks—natural gas switching, renewable energy and energy-efficiency programs—would be eliminated, leaving a rump emissions plan that could pass muster in the courts.

Unlike the recent decision to exit the Paris Climate Accords, in which the United States simply said it wouldn’t follow through on a prior commitment, the Clean Power Plan’s regulation of existing power plants was finalized in June 2015. That makes it legally hazardous to jettison the plan, which remains tied up in the courts, without a replacement. Only an unprecedented legal stay issued by the Supreme Court in February 2016 – shortly before the death of Justice Antonin Scalia – kept the regulations from coming into force.

It’s worth remembering that the Clean Power Plan was the Obama administration’s grand attempt to regulate emissions from coal-fired power plants. The White House sought to expand the scope of the Clean Air Act beyond “the fence line” of power plants to cut state-level emissions coercively, whether states agreed to the federal actions or not.

But just because the revised rule wouldn’t be as powerful doesn’t mean it wouldn’t be just as damaging to the economy over the long run. Dictating winners and losers in energy markets is always a bad idea. This is as true of the bias against coal and nuclear energy shown by regulators during the second Obama term as it would be of this new proposal to upgrade coal-powered electricity plants to a point where they still won’t be as clean as a new natural gas-fired plant.

The natural gas fracking revolution – driven entirely by market forces and private property rights – has contributed to the 14 percent reduction in energy-related U.S. carbon emissions since 2005, leaving emissions roughly where they were in the early 1990s. Leaving an ineffective regulatory structure in place of the original CPP may save the Trump administration a lot of time and effort, but it isn’t the principled approach to energy development this country needs in the 21st century.


Image by 1968

Microsoft’s alternative power deal could be breakthrough for consumer choice

shutterstock_622022240

Washington state regulators approved a settlement last week between Microsoft Corp. and its monopoly utility, Puget Sound Energy Inc. (PSE), to enable Microsoft to buy its own wholesale energy or develop its own supply. The agreement represents a more cordial approach amid a widespread trend of large customers seeking alternative power suppliers, but it underscores the inherent choice-constraining limitations of the monopoly model, even with favorable amendments.

The monopoly model, premised on a single power provider with captive customers, does not easily accommodate customer preferences. However, a glimmer of choice has emerged recently. Microsoft is just one of many corporate customers to pursue third-party purchases or direct-access policies that enable one-off customer choice within a monopoly footprint.

Spurred by less expensive alternative suppliers and corporate commitments to clean energy, corporations have procured more than 6 gigawatts of wind and solar in the last two years alone. In 2016, Microsoft and Amazon led the pack in corporate clean-energy procurement. Based on public commitments, this trend looks likely to continue, with the likes of Google, Apple, Johnson & Johnson and more committing to source all of their consumption from renewables.

At a time when climate and clean-energy policy too often reverts to a culture war, voluntary clean-energy procurement by corporate leaders marks a refreshing intersection of the conservative and green agendas. Bill Hogan, a Harvard professor and electricity markets expert, emphasizes that customers spending their own money to contract for green power is consistent with market principles. He clarifies that the “problem comes when governments spend other people’s money, using their power to mandate, that is a public policy concern.”

This may blossom into the new chapter of voluntary environmentalism, which has roots in the kinds of conventional pollution reduction (beyond legal requirements) that preceded today’s amplified climate discussion. For some companies, the reputational or branding benefits of contributing to a cleaner environment can provide substantial incentives. It appears those benefits are magnifying at the same time that the cost of renewables has fallen, spearheaded by merchant wind developers providing very competitive power purchase agreements.

Some have voiced concerns that an exodus of big customers from monopoly service may leave other customers with higher bills. A large customer’s departure could create stranded costs for the utility, which it will shift to other customers if permitted by regulators. To cover these costs, regulators may require customers seeking to leave the monopoly to pay exit fees. Companies like Microsoft might even go beyond the exit fee by pledging support for local community programs.

Proper exit fees can prove technically challenging to calculate. In addition, monopoly utilities often leverage those fees to impose a regulatory barrier to exit. In particular, they frequently will underplay the benefits to their remaining customers of the reduced costs and expanded opportunities to sell excess power.
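To see why, consider a toy version of the calculation: the departing customer’s share of the utility’s undepreciated investment, net of the market value of the power freed up for resale. Everything in the sketch below, from the simple formula to the dollar figures, is a simplifying assumption of mine, not the method used in any actual proceeding.

```python
# A toy stranded-cost calculation, illustrating why exit fees are contested.
# The formula and every figure below are hypothetical; real proceedings turn
# on asset useful-life assumptions and power market forecasts.

def exit_fee(undepreciated_cost, customer_share, freed_power_value):
    """Customer's share of sunk costs, net of the market value of power the
    utility can now resell. A negative result means the departure is a net
    benefit to remaining customers."""
    return undepreciated_cost * customer_share - freed_power_value

# The same departure under two sets of assumptions:
print(exit_fee(2_000_000_000, 0.03, 30_000_000))  # utility's view: customer owes $30M
print(exit_fee(1_500_000_000, 0.03, 70_000_000))  # customer's view: utility owes $25M
```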

Litigated exit fee cases have proven contentious and inefficient. In Nevada, numerous cases have led to prolonged regulatory battles and deterred some companies (e.g., Las Vegas Sands Corp.) from seeking to buy power on the open wholesale market. In a recent filing before the Nevada Public Utilities Commission (PUC), Wynn Las Vegas argued the exit fee imposed by the PUC—whose staff changed their methodology from the one applied to the previous exit request of the data storage company Switch—was unfair and discriminatory.

In fact, Switch incurred regulatory headaches of its own. The PUC rejected its initial proposal to switch to an alternative provider in 2015. Other Nevada resorts and casinos, including Caesars Entertainment Corp., are either considering or already have applied to leave monopoly service, with the MGM Grand agreeing to pay an $87 million exit fee.

Even with direct access, regulatory delays and inflated exit fees can serve as chronic limits to customer choice, not to mention that clinging to the monopoly model results in an underdeveloped market for alternative suppliers. Even the Microsoft settlement revealed differences between the customer and the utility over how to calculate the exit fees. In its initial testimony, Microsoft argued that its departure would provide a net benefit and estimated that, using generally accepted rate-setting standards, the utility would compensate Microsoft between $15 million and $35 million to leave (the two sides differed over the timeframe used to calculate the useful life of the utility’s assets and market value of excess generation).

However, in the end, Microsoft agreed to pay an inflated $24 million exit fee. The settlement represents a deal between numerous parties that is likely more efficient than prolonged litigation. Such a collaborative approach may serve as the preferred interim model in monopoly states (i.e., negotiated special contracts), short of a new customer tariff that would streamline the process.

Despite the niceties of settlements, such agreements retain undertones of the fundamental rift between increasingly heterogeneous customers and the choice-constraining monopoly model. In restructured or “retail choice” states, customers choose their power provider freely, and large customers often negotiate contract terms tailored to their unique profile.

Restructured states present a big advantage for corporate consumers, and policymakers increasingly have noted this advantage for retaining and attracting businesses. Enabling third-party service or direct access is certainly not the “end game” regulatory structure, but it offers a great incremental step to introduce customer choice, with benefits both for customers and for the environment.


Image by Katherine Welles

Jonathan Coppage all over your TV screen

cnbc-screenshot

Visiting Senior Fellow Jonathan Coppage’s recent Washington Post op-ed taking apart the alarmist coverage of a purported trend of millennials living at home as adults (tl;dr, it’s a normal thing, historically, and there’s a lot to recommend it in practice) drew quite a bit of attention, earning Jon invitations to appear on a pair of national cable news shows. First, there was a two-part spot on CNBC’s Squawk Box:

Next, he was on CNN, discussing the piece with Smerconish host Michael Smerconish:

Why quality will trump quantity in the net-neutrality debate

Also appeared in: TechDirt

shutterstock_546896407

If you count by numbers alone, net-neutrality activists have succeeded in their big July 12 push to get citizens to file comments with the Federal Communications Commission. As I write this, it looks as if 8 million or more comments have now been filed on FCC Chairman Ajit Pai’s proposal to roll back the expansive network-neutrality authority the commission asserted under its previous chairman in 2015.

There’s some debate, though, about whether the sheer number of comments—unprecedented not only for the FCC, but for any federal agency—is a thing that matters. I think it does, but not in any simple way. If you look at the legal framework under which the FCC is authorized to regulate, you see that the commission has an obligation to open its proposed rulemakings (or revisions or repeals of standing rules) for public comments. In the internet era, of course, this has meant enabling the public (and companies, public officials and other stakeholders) to file online. So naturally enough, given the comparative ease of filing comments online, controversial public issues are going to generate more and more public comments over time. Quite possibly, this FCC proceeding—centering as it does on our beloved public internet—marks a watershed moment, after which we’ll see increasing flurries of public participation in agency rulemakings.

Columbia University law professor Tim Wu—who may fairly be considered the architect of net neutrality, thanks to his having spent a decade and a half building his case for it—tweeted July 12 that it would be “undemocratic” if the commission ends up “ignoring” the (as of then) 6.8 million comments filed in the proceeding.

But a number of critics immediately pointed out, correctly, that the high volume of comments (presumed mostly to oppose Pai’s proposal) doesn’t entail that the commission bow to the will of any majority or plurality of the commenters.

I view the public comments as relevant, but not dispositive. I think Wu overreaches to suggest that ignoring the volume of comments is “undemocratic.” We should keep in mind that there is nothing inherently or deeply democratic about the regulatory process – at least at the FCC. (In fairness to Wu, he could also mean that the comments need to be read and weighed substantively, not merely be tallied and dismissed.)

But I happen to agree with Wu that the volume of comments is relevant to regulators, and that it ought to be. Chairman Pai (whose views on the FCC’s framing of net neutrality as a Title II function predate the Trump administration) has made it clear, I think, that quantity is not quality with regard to comments. The purpose of saying this upfront (as the chairman did when announcing the proposal) is reasonably interpreted by Wu (and by me and others) as indicating he believes the commission is at liberty to regulate in a different way from what a majority (or plurality) of commenters might want. Pai is right to think this, I strongly believe.

But the chairman also has said he wants (and will consider more deeply) substantive comments, ideally based on economic analysis. This seems to me to identify an opportunity for net-neutrality advocates to muster their own economists to argue for keeping the current Open Internet Order or modifying it more to their liking. And, of course, it’s also an opportunity for opponents of the order to do the same.

But it’s important for commenters not to miss the forest for the trees. The volume of comments both in 2014 and this year (we can call this “the John Oliver Effect”) has in some sense put net-neutrality advocates in a bind. Certainly, if there were far fewer comments (in number alone) this year, it might be interpreted as showing declining public concern over net neutrality. Obviously, that’s not how things turned out. So the net-neutrality activists had to get similar or better numbers this year.

At the same time, advocates on all sides shouldn’t be blinded by the numbers game. Given that the chairman has said the sheer volume of comments won’t be enough to make the case for Title II authority (or other strong interventions) from the commission, it seems clear to me that while racking up a volume of comments is a necessary condition to be heard, it is not a sufficient condition to ensure the policy outcome you want.

Ultimately, what will matter most, if you want to persuade the commissioners one way or another on the net-neutrality proposal, is how substantive, relevant, thoughtful and persuasive your individual comments prove to be. My former boss at Public Knowledge, Gigi Sohn, a net-neutrality advocate who played a major role in crafting the FCC’s current Open Internet Order, has published helpful advice for anyone who wants to contribute to the debate. I think it ought to be required reading for anyone with a perspective to share on this or any other proposed federal regulation.

If you want to weigh in on net neutrality and the FCC’s role in implementing it—whether you’re for such regulation or against it, or if you think it can be improved—you should follow Sohn’s advice and file your original comments no later than Monday, July 17, or reply comments no later than Aug. 16. If you miss the first deadline, don’t panic—there’s plenty of scope to raise your issues in the reply period.

My own feeling is, if you truly care about the net-neutrality issue, the most “undemocratic” reaction would be to miss this opportunity to be heard.


Image by Inspiring

 

Alabama backs down on targeting margarita pitchers

shutterstock_3506070

In these hot summer months, nothing refreshes like a margarita. But in Alabama, the state Alcoholic Beverage Control Board had banned pitchers of this limey and refreshing libation. Seriously.

R Street’s Cameron Smith exposed the ban and advocated for its repeal in AL.com after a series of email exchanges with ABC representatives:

The Alabama Alcoholic Beverage Control Board (ABC) doesn’t want you wasting away in Margaritaville, so they’ve banned pitchers of the frozen concoction outright.

No, I’m not joking.

But we shouldn’t be surprised. This is the ABC that cracked down on people drinking while dining on the sidewalks in Mobile. It’s the same ABC that cut a deal to impose a 5 percent liquor mark-up to help the legislature and the governor enact a back-door tax hike.

Now the agency has taken to reminding licensees of its legal ‘interpretation’ that beer is the only alcoholic beverage that may be served in a pitcher…

ABC claimed it was concerned with the tequila in margarita pitchers “settling” over time, which could lead to situations where the first few drinks poured from the pitcher had less alcohol than the ones from the bottom of the pitcher. As Smith pointed out, this amounted to an argument that a group of legal adults “can’t figure out how to handle a pitcher of margaritas shared among them.”

Smith’s column generated enough outcry among Alabama residents that Dean Argo, ABC’s government relations manager, took to AL.com to announce that the board would no longer target margarita pitchers. In short, ABC has backed off, at least for now. (The Associated Press also covered the reversal).

While this was a clear win for margarita lovers across the state, Argo ominously suggested that the state may still draw a line between which types of drinks can be served in pitchers and which cannot. The dividing line would appear to be if the drink in question is “customarily” served in pitchers. So, margaritas and beer would seem to be safe, but what about less clear cases like mojitos? Mojitos are certainly served in pitchers sometimes, but is it “customary” to serve them that way? And how about bottled cocktails, which have become all the rage in the cocktail world? Are they a “pitcher,” and if so, are they “customary”?

The ABC’s decision to draw the line at what types of drinks are “customarily” put into pitchers rests on the kind of ambiguous legal phrase that only a government lawyer could love. Call it “pitcher ambiguity,” and suffice it to say R Street’s team will be the first to blow the whistle if more pitcher shenanigans go down in Alabama.

Note: Cameron Smith has also been tracking and writing about the Alabama ABC’s attempt to enact a stealth tax increase by increasing the state liquor mark-up. Read more about that here.


Image by Danny E Hooks

 

Welcome to Climate Junior High

shutterstock_640678507

The new kid in the class is glib and loud, while the gal in charge of the “cool kids” pretended he hadn’t even entered the classroom. At least, that’s the way it seems from watching President Donald Trump and German Chancellor Angela Merkel in Hamburg last weekend at the Group of 20 (G-20) summit involving a majority of the world’s most industrialized countries.

In the weeks before the meeting, analysts and partisans were praying for some kind of moral reckoning for Trump on his arrival in Hamburg, the heart of Germany’s political left. Trump’s withdrawal from the Paris Climate Accords in early June had sent many European leaders into a state of shock, given that the European Union’s plan to cut its climate emissions dramatically is its pre-eminent geopolitical strategy.

Speaking before the German Parliament in late June, Merkel said of the U.S. withdrawal that “the climate treaty is irreversible and is not negotiable” – a direct rebuke of Trump’s decision to go it alone concerning climate change.

In other words, a beat-down in the lunchroom was expected.

Nevertheless, Trump and Merkel played nice in front of the dignitaries during the July 8-9 summit, and the United States dissented with relative ease from the 19 other countries’ consensus language on climate change in the final joint declaration. The White House even was allowed to insert language saying the United States “will endeavor to work closely with other countries to help them access and use fossil fuels more cleanly and efficiently.”

The addition of the clean fossil fuel language was a “poker tell” revealing the radically divergent strategies at the heart of the chasm between the United States and the European Union on energy and climate policy. The problems undermining the Paris Accord—its voluntary and top-down nature, in particular—have been highlighted repeatedly by R Street and others. The facts of the case remain unchanged.

The United States, through the development of hydraulic fracturing and subsequent very low natural gas prices, has cut its energy-related carbon emissions more than any other member of the G-20 since 2005. The reason has nothing to do with international agreements or top-down approaches.

Instead, market forces drove natural gas drillers in the late 2000s to develop the hydraulic fracturing of shale basins in Pennsylvania and Texas. The explosion of natural gas supplies soon made it the fossil fuel of choice, over coal, for electricity plants around the country. The rest is history.

Since peaking in 2007, U.S. energy-related carbon emissions are down roughly 14 percent, while Germany, which sees itself as the world leader on climate change, has seen its carbon emissions fall 7 percent during the same period.

Given the size of the U.S. economy, the scale of the emissions savings has been enormous, with U.S. emissions falling 600 million metric tons compared to Germany’s 70 million tons over the same time period. All this while the European Union spent $1.2 trillion on wind, solar and bio-energy subsidies and an emissions trading scheme (ETS) that priced carbon too low to be effective.

Merkel waited until the very end of the summit to express her disdain: “Unfortunately – and I deplore this – the United States of America left the climate agreement,” she said in her closing statement.

As it stands, the differences in energy and climate outlook between the United States and Europe could not be wider. The United States looks to export both oil and natural gas into Europe. Meanwhile, both Germany and France are constraining both nuclear power and all fossil fuel use, as they aim for a dramatic cut in emissions by midcentury.

Perhaps French President Emmanuel Macron, who is also a new kid in the class, has a different plan to bring Trump into the climate club when he hosts Trump for Bastille Day celebrations in Paris July 14.


Image by Rawpixel.com

 

Kosar talks CRS reports on FedSoc podcast

In episode 2 of the Federalist Society’s Necessary & Proper podcast, the R Street Institute’s Kevin Kosar discusses the Congressional Research Service, a nonpartisan government think tank in the Library of Congress. CRS assists Congress in lawmaking and oversight, but lamentably, Congress has downsized the agency. CRS also has struggled to adapt to the hyper-partisan, internet-connected Hill environment.

The full episode is embedded below:

South Miami solar mandate would trample property rights

shutterstock_497672149

Expanding solar energy to rely less on oil, gas and other nonrenewable resources is an almost universal goal, regardless of one’s political persuasion.  Indeed, with growing concerns about climate and the economic and even national security implications of relying on nonrenewable and oftentimes foreign energy sources, it makes sense to look at solar as a viable means to power more of society’s needs.

But as noble as the expansion of solar energy might be, its pursuit should never infringe on individual rights, as some local governments appear to be doing. For example, the City of South Miami is considering an ordinance that would require installation of solar panels on all newly constructed homes, as well as older homes whose owners elect to renovate 50 percent or more of the square footage.

Indeed, although the cost of solar-energy-generating devices has dropped in recent years, they still remain cost-prohibitive to most. This ordinance would not only increase the price of homes in a city where cost-of-living is already way above the national average, but may actually serve as a disincentive to existing homeowners who wish to make their older homes more energy efficient. Residents who might otherwise consider remodeling their homes with energy-efficient doors, windows, roof shingles, insulation and appliances may think twice if they were also forced to purchase expensive solar panels.

But even that is not the point.

This is a clear and egregious example of government trampling on individual property rights. Local and state authorities can and should develop building codes to ensure safety; Miami-Dade County already has a strict building code due to its vulnerability to hurricanes. However, residents should not be forced to purchase an expensive product that serves no health or safety purpose as a condition to develop or improve their own properties, just so politicians can feel good about themselves.

It is fair to debate how to expand solar-energy production and who should pay for it. Should government subsidize research? Should government grants or tax credits be offered to entice individuals to install solar panels? Should utility companies purchase excess power generated by privately owned solar devices?

These are all relevant public-policy issues that well-intended people with differing opinions can debate, and they all revolve around the notion that solar-device installation is a choice, not a mandate. Government should not pick one industry over another through subsidies or unfair incentives or penalties. Allowing energy producers to compete on a level playing field will encourage them to innovate and make their products more efficient and thus more economically viable over time.


Image by ND700

 

What’s in the FY2018 House legislative branch appropriation?

shutterstock_589919393

The House Appropriations Committee approved the Fiscal Year 2018 legislative branch appropriations bill via a June 29 voice vote. The bill calls for $3.58 billion of funding for House and joint-chamber operations (Senate-specific items are not included), a full $100 million more than the enacted FY2017 funding levels. It should, however, be noted that the FY2018 appropriation remains much lower than the FY2010 appropriation.

On the same day, the committee released a full report explaining the appropriating rationale.

What is actually included in the bill? Who won and who lost the funding battles?

Big Winners

Security: In light of the recent shooting of Rep. Steve Scalise, R-La., staffer Zachary Barth, and Capitol Police officers Crystal Griner and David Bailey, the committee clearly saw a need to boost various forms of security for members and the government. The Capitol Police received an increase of $29 million, the House sergeant-at-arms budget was upped $5 million to $20.5 million, and $10 million was itemized to enhance the cybersecurity program of the chief administrative officer (CAO).

Architect of the Capitol (AOC): The stewards of the Capitol complex, from building maintenance to landscaping, received a $48.4 million increase in funds over FY2017 enacted levels. The committee instructed the AOC to spend the appropriated $577.8 million on efforts that “promote the safety and health of workers and occupants, decrease the deferred maintenance backlog, and invest to achieve future energy savings.”

Library of Congress (LOC): For FY2018, the Library of Congress’ appropriations were increased $16.9 million to $648 million in an effort to modernize information technology and copyright efforts, as well as provide more funds ($3.5 million) to Congress’ nonpartisan think tank, the Congressional Research Service (CRS). Additionally, $29 million of the AOC’s appropriation was itemized for improvement and maintenance of LOC buildings and grounds.

Transparency: After years of debating the issue, the appropriators directed CRS to make all of its nonconfidential reports available to the public. The agency was given 90 days to submit an implementation plan, including cost estimates, to its oversight committees.

Big Losers

House Office Buildings: Despite the AOC receiving a sizable bump in appropriations, the amount allocated for the maintenance and care of the four House office buildings initially was chopped by $27.4 million from FY2017 levels. A voice-vote amendment later restored $4 million, leaving the cut at $23.4 million.

Members’ Representational Allowance (MRA): The funding stream allowing members of the House to hire more staffers and better compensate current ones remained at FY2017 levels ($562.6 million). “This level of funding will allow the MRAs to operate at authorized levels as approved by the Committee on House Administration,” declared the committee. What it will not do is reverse the long decline in congressional staff levels and salaries.

Government Accountability Office (GAO): Though its funding was not cut, GAO was granted only a $450,000 bump in funds, despite requesting a $46 million increase over FY2017 enacted levels. The agency requested the substantial increase for FY2018 chiefly for increased staffing in order to reduce the amount of improper governmental payments, identify ways to close the gap between taxes owed and taxes paid, and assist Congress in determining “policy implications of increasingly complex and rapidly evolving development of science and technology.” Instead of the comparably larger increases enjoyed by sister agencies CRS ($3.5 million) and the Congressional Budget Office ($2 million), GAO’s appropriation remained relatively flat at $568 million.

Legislative Branch Appropriation Bill Specifics

Capitol Police: FY2018 funding levels increased $29 million to $422.5 million, including an increase of $7.5 million to “enhance Member protection, increased training, equipment and technology-related support items”; an increase of $13.2 million for Capitol Police buildings and grounds; and half-year funds to hire 48 additional sworn officers.

House Sergeant-at-Arms: An increase of $5 million with the “intent of enhancing security for Members when they are away from the Capitol complex. The Committee is aware that a specific plan is still evolving and once fully developed a plan will be presented to the Committee.”

Members’ Representational Allowance (MRA): Though the MRA remains at FY2017 levels ($562.6 million), “the Committee has provided resources necessary to support the Committee on House Administration’s plan to increase Member’s Representational Allowance (MRA) by $25,000 per account this year for the purpose of providing Member security when away from the Capitol complex.”

Chief Administrative Officer (CAO): The CAO received an additional $10 million for strengthened cybersecurity measures. Additionally, the committee suggested that “with effective management of the program and continued support in appropriations, sufficient funding exists” to increase the number of two-year fellows participating in the CAO’s Wounded Warrior Program from 54 to 85.

House Leadership Offices: FY2018 funding levels remained constant at $22.3 million.

House Committees: Appropriations for the salaries and expenses of House committees decreased by $45,004, from $150,324,377 in FY2017 to $150,279,373 for FY2018.

Joint Committees: The Joint Committee on Taxation received an increase of $360,000 to $10.46 million, while the Joint Economic Committee’s funding remained at $4.2 million.

Congressional Budget Office (CBO): Funding levels increased $2 million, from $46.5 million in FY2017 to $48.5 million for FY2018.

Architect of the Capitol (AOC): FY2018 funding levels increased $48 million to $578 million, including a $12.7 million increase for care and maintenance of the U.S. Capitol; a $20 million increase in funding for the Capitol Power Plant; a $29 million increase for Library of Congress buildings and grounds; and a decrease of $27.4 million for House office buildings maintenance.

Congressional Research Service (CRS): Funding levels increased $3.5 million from $108 million in FY2017 to $111.5 million for FY2018.

Government Publishing Office (GPO): FY2018 funding levels remained constant at $117 million.

Office of Compliance: FY2018 funding levels remained flat at $3.6 million.

Amendments 

Two amendments to the FY2018 legislative branch appropriations bill were adopted by the Appropriations Committee.

  1. The manager’s amendment from Rep. Kevin Yoder, R-Kan., added $4 million to House office building maintenance. Instead of a decrease of $27.4 million, the amendment makes the decrease $23.4 million.
  2. Rep. Barbara Lee, D-Calif., sponsored an amendment that directed the CAO to submit a report to the committee within 90 days “addressing the ways in which Members and staff who have hiring and management responsibilities can be given the tools to combat unconscious bias in hiring and promotion, and with education on the negative impact of bias.”

Image by Golden Brown

 

States still stuck when it comes to pension plan fixes

shutterstock_545986114

I spoke recently with Bill Howell, the longtime speaker of Virginia’s House of Delegates. While he is not standing again for election, he is the kind of person who wants to use the last portion of his authority with the state government to work on the most important issues facing his state.

Number one on his list is pension reform. Nobody will be able to pin on him the consequences of inaction today or the failure of an unsustainable system over time. Making the choice to spend the last months of his time in office with a virtual shovel on his shoulder is leadership one doesn’t see much across the Potomac these days.

Other places will certainly sound “canary in the coal mine” warnings about the fiscal challenges of our retirement security system, but our political system and culture are generally less responsive to these kinds of virtually certain problems than they are to perceived future environmental hazards. As one example, due in large part to the one-child policy instituted in 1979, China is now contemplating the “4:2:1” situation of one grandchild in the workforce struggling to support two parents and four grandparents. For perspective, China is physically roughly the same size as the United States, with five times its population. That country alone is projecting a population over age 60 of more than 300 million people by 2024. The pressure on offspring to care for this many elderly people is mirrored in public programs.

Somewhere in my files is a page of dates in the not-so-distant future that represent each state’s technical bankruptcy, if something isn’t done in the meantime to alter the math. There is also Medicaid, of course, the budget issue du jour, but these dates are only based on pensions and state employee health care. In those jurisdictions where local governments participate in the state systems, their figures are included.

Pennsylvania is a good example of the political and financial pressures on governments to keep promises to their employees. Having barely celebrated passage of needed reforms a few days ago, the state is already seriously discussing borrowing the money it just required itself to put aside to fund those reforms.

Not even a month ago, Pennsylvania lawmakers enacted bipartisan legislation that required them to fully fund the employer (state) share of their defined contribution plan. When Gov. Tom Wolf signed the bill, his public comment was: “Here in Harrisburg we can get important things done in a way that I think a lot of other places cannot.”

The new law provides that only hazardous-duty state employees, such as law enforcement, will stay eligible for the once-ubiquitous defined benefit plans that anchored public pensions for decades but have been mostly phased out in the private sector. Both state and school employees who start jobs in 2019 will have three retirement options, and current employees will have to choose one, as well. Two of the new plans combine features of a guaranteed pension amount with an investment vehicle similar to private-sector plans. The third is a full defined contribution plan akin to a 401(k), into which the state pays 2 percent of salary for school employees and 3.25 percent for other state workers, alongside the employees’ own minimum contributions of 7.5 percent.
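As a concrete illustration of that defined contribution option, the sketch below works out the annual deposits at the stated rates. The $50,000 salary and the helper function are illustrative assumptions of mine, not anything drawn from the statute.

```python
# A minimal sketch of the contribution math under the new full defined
# contribution option. The $50,000 salary is hypothetical; the employer and
# employee rates are those described above.

def dc_contributions(salary, employer_rate, employee_rate=0.075):
    """Annual employer and employee deposits into the DC plan."""
    return salary * employer_rate, salary * employee_rate

for label, rate in [("school employee", 0.02), ("other state worker", 0.0325)]:
    employer, employee = dc_contributions(50_000, rate)
    print(f"{label}: employer ${employer:,.0f}, employee ${employee:,.0f}")
```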

Now there are rumblings that the state will authorize—as Illinois and other states with shaky financials have—sales of pension obligation bonds to cover a portion of its share. It is theoretically possible to earn a rate of return on the invested bond proceeds greater than the interest owed on the bonds, but successes are few, and the risk to future workers and taxpayers accordingly great. Both Illinois and New Jersey have sold billions of dollars of pension obligation bonds. This year, 80 percent of the money paid out by Illinois for state teacher pension payments is going toward the unfunded liability. The state has never paid its full share, according to the Teachers’ Retirement System. Racking up long-term losses on these instruments, Illinois jacked up its income tax by 66 percent in 2011, and another 32 percent increase was enacted over Gov. Bruce Rauner’s veto this past week. These are not unrelated stories.
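The wager embedded in these bonds is easy to state in code: borrow at a fixed rate, invest the proceeds and hope the return exceeds the interest owed. The sketch below uses hypothetical figures of my own choosing to show how quickly the bet can go wrong.

```python
# A minimal sketch of the pension obligation bond gamble: borrow at a fixed
# rate, invest the proceeds and keep (or eat) the spread. The principal,
# rates and horizon are all hypothetical.

def pob_outcome(principal, borrow_rate, return_rate, years):
    """Net gain (or loss) from investing bond proceeds vs. the compounded
    cost of the borrowing."""
    invested = principal * (1 + return_rate) ** years
    owed = principal * (1 + borrow_rate) ** years
    return invested - owed

principal = 1_000_000_000  # a hypothetical $1 billion issue
for r in (0.08, 0.05, 0.02):  # optimistic, break-even and bad-decade returns
    print(f"return {r:.0%}: net ${pob_outcome(principal, 0.05, r, 20):+,.0f}")
```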

New Jersey has suffered the indignity of being sued by federal regulators for securities fraud in its pension bond sales. The Garden State’s pension system was rated dead last among the 50 states in the most recent Pew Charitable Trusts national study. State workers have been paying in higher amounts since the 2011 reforms, but the state has not kept up its commitment. Ironically, the latest reform proposal for the worst-funded pension system among the states is to give it the billion-dollar lottery. This would immediately increase the funded ratio to 65 percent – a dramatic improvement. If there is a better metaphor for a New Jersey solution, I don’t know what it would be. People in the Garden State will be encouraged to keep on gambling.

Pennsylvania should stay the course, and allow the reforms to nudge the retirement plans for state workers and teachers back toward stability.


Image by Aaban

 

Carbon tax versus clean tax cuts policy wonk rumble

Back in April, R Street Energy Policy Director Josiah Neeley moderated a panel at Earth Day Texas in Dallas. Billed as a “Policy Wonk Rumble,” the panel compared the merits of different ways to use the tax code to encourage clean energy and reduce greenhouse-gas emissions. Also featured on the panel were Peter Bryn of Citizens Climate Lobby, Travis Bradford of Columbia University, Rob Sisson of ConservAmerica, and Rod Richardson of the Grace Richardson Fund.

The future of aviation demands privatized air-traffic control

shutterstock_368317667

American air-traffic control is safe, but as currently constituted, the system won’t be able to keep up with the increasing demand for domestic and international air travel. To ensure the Federal Aviation Administration can continue to modernize and operate efficiently, free of budget uncertainty and political interference, air-traffic control should be turned over to an independent nonprofit corporation, as proposed by H.R. 2997, the 21st Century Aviation Innovation, Reform, and Reauthorization Act.

From 1996 to 2012, the FAA’s budget doubled, even though staff levels stayed roughly constant and the agency’s productivity actually fell. A 2016 inspector general’s report found that, of the agency’s 15 most recent major system acquisitions, eight had gone over budget by a total of $3.8 billion and eight were behind schedule by an average of more than four years. These sorts of problems illustrate the difficulties the FAA faces in adapting to new market conditions driven by higher and more complex demand.

The 21st Century AIRR Act—sponsored by Rep. Bill Shuster, R-Pa., chairman of the House Transportation and Infrastructure Committee, which cleared the bill June 27 in a 32-25 vote—would assign oversight of America’s air-traffic control system to a new nonprofit corporation, with a CEO who is answerable to a board of directors made up of “a diverse cross-section of the aviation system’s stakeholders and users.” The act would refocus the FAA on federal safety oversight and streamline the FAA certification process, making it easier for companies to get their products out on time. This would encourage innovation in aviation technology by lowering the cost of implementation.

The proposal has support from President Donald Trump, who included a version of it in his proposed FY 2018 budget. As the National Taxpayers Union Foundation detailed in a recent piece, “the budget forecasts that taxes would be reduced by $115 billion from FY 2021 to FY 2027. The FAA’s budget for ATC would be reduced by $70 billion, leaving the agency to focus on regulating aviation safety.”

ATCBudget18Chart

But the measure also faces pushback from a variety of aviation interests. They prefer the Senate’s FAA reauthorization bill from Sen. John Thune, R-S.D., which does not include air-traffic control privatization. The Shuster proposal should be considered commonsense legislation, not only cutting government waste but also making the world a little bit safer. Let’s hope it moves on the House floor soon.


Image by Stoyan Yotov

 

Private flood insurance should be allowed to compete on a level playing field

shutterstock_481251418

Since 1968, the National Flood Insurance Program (NFIP)—in a well-intentioned but ill-designed effort to help home and business owners in flood-prone regions—has provided flood insurance at below-market rates. Predictably, the program has racked up a significant amount of debt, discouraged private competition and innovation and distorted consumers’ ability to calculate the risk of living and building in flood-prone areas.

As Congress considers NFIP reauthorization this summer and fall, lawmakers ought to implement structural reforms that will benefit both insurance consumers and the American taxpayers.

It is a well-known economic adage that “if you subsidize something, you get more of it.” In this case, the NFIP’s practice of subsidizing insurance premiums for high-risk areas has created a moral hazard problem where the government insurance program actually encourages higher levels of risk-taking. This has turned out to be quite costly for the American taxpayer, as the NFIP is now over $25 billion in debt to the U.S. Treasury. The Government Accountability Office has found the program is unlikely ever to generate enough revenue to cover its costs, exposing the federal government to further financial risk.

Yet the subsidies keep flowing to areas where floods are common, and where it may not otherwise be cost-effective to rebuild. There is no better evidence that the NFIP is encouraging risk than the fact that 25-30 percent of flood insurance claims in the NFIP system are generated by a mere 1 percent of properties that have government-backed insurance. This distortion of risk will continue to make the program fiscally unsustainable until the government ceases to offer insurance premiums at significantly below-market rates.

Unsurprisingly, regulations on what kinds of private market insurance lenders can accept, along with the subsidized rates, historically have made it difficult for insurance companies to offer competitive flood insurance plans. Private companies do not have the luxury of losing $25 billion. Though previous reforms sought to level the playing field and move the NFIP toward risk-based rates, unclear language has continued to stymie private market development, limiting choice for consumers and putting taxpayers at continued risk. Among the issues that put private entities at a disadvantage is that NFIP policyholders who make the switch to private insurance are not considered to have continuous coverage, and therefore may have to pay significantly more should they ever decide to switch back.

Congress should look to Florida as an example of how to salvage a failing insurance system. Before the state enacted reforms in 2010, Florida’s public insurance program, Citizens Property Insurance Corp., was fiscally unsound, and Florida taxpayers were exposed to high levels of risk in the event of another hurricane. State lawmakers incrementally raised premiums to bring them in line with market rates and allowed private companies to assume many of the policies previously written by Citizens. As they did, the fiscal burden shifted from taxpayers to private entities.

The Flood Insurance Market Parity and Modernization Act, introduced in both the House and the Senate, would be an important first step to enable private market insurance to compete on a level playing field with government insurance. It would clarify federal lending rules, allow insurers who participate in the NFIP’s Write Your Own program to also underwrite private flood insurance and end the practice of penalizing those who choose to purchase private coverage. It would also further the move toward a less distorted system and thus shift some of the burden off the taxpayers.

Despite passing the House unanimously in 2016, and passing the House Financial Services Committee unanimously last month as part of its package to reauthorize NFIP, the bill has not yet moved in the Senate. The Senate Banking Committee should take a lesson from its House colleagues and include this important clarification in its own legislation to reauthorize NFIP. Failing to do so would only ensure that, for many years to come, American homeowners will continue to be at the mercy of a failing government program, all on the taxpayer’s dime.


Image by humphery

Pollock before the Subcommittee on Monetary Policy and Trade

R Street Distinguished Senior Fellow Alex Pollock testifies before the House Financial Services Committee’s Subcommittee on Monetary Policy and Trade in a June 28 hearing on “The Federal Reserve’s Impact on Main Street, Retirees and Savings.”

Coppage at R Street-CNU event in Salt Lake City

The R Street Institute recently co-hosted an event in Salt Lake City with the Utah chapter of the Congress for the New Urbanism on how to make both housing affordability and strong communities possible in a red-state boom town like Utah’s capital. Alongside Sutherland Institute Director of Public Policy Derek Monson and Health Hansen, a staffer to Sen. Mike Lee, R-Utah, R Street Visiting Senior Fellow Jonathan Coppage reviewed the need to allow for small solutions to big problems, such as relegalizing accessory dwelling units and missing-middle housing forms.

The downsides of using executive agency detailees

In a previous post, I recounted the advantages of using executive detailees as a means to combat staffing shortages on Capitol Hill. In short, agency detailees can serve as a free source of policy expertise to Congress, providing committees with experience and insight into agency decisionmaking and likely responses to congressional actions.

But, as with all governing arrangements, executive-branch detailees are not always an unalloyed good. Detailees, as some Hill veterans will explain, can come with costs.

  1. Detailees can have divided loyalties

Detailees can have a hard time shedding their agency allegiances, ultimately resulting in divided loyalties between their parent agency and their new congressional committee. These allegiances may be unconscious byproducts of spending a career in the executive branch.

Other agency employees, however, may have more deliberate congressional prejudices. Such detailees view Congress and its committees as institutions unfamiliar with the intricate inner-workings of their agency, and ones attempting to encroach on their expertise and operations with new laws and a constant barrage of oversight information requests. In these instances, detailees may struggle to work in support of the institutional interests of Congress.

  2. Detailees can have fixed policy preferences

Relatedly, borrowed agency employees may bring with them explicit policy preferences, often within specific issue areas they handled within their parent agency. Serving as a policy expert on a relevant committee may provide an opportunity to grind such a policy ax and, in turn, warp the policymaking processes within their new committee.

  3. Detailees often need training

Detailees are often unfamiliar with the legislative process and require basic training in congressional procedures once they get to the Hill. Given that committee resources are already severely strapped, providing such training further saps the time of permanent committee staff.

The time and resources spent bringing detailees up to speed on the ways of the Hill can result in a small return on the investment for Congress. What’s more, because detailees are loaned out for a limited time—often a year or less before returning to the executive branch—a constant cycle of orientation, training, working and departing can develop where very little time is spent on intricate policymaking.

  4. Detailees can mute the call to increase staffing capacity

A growing dependence on detailees as a means to compensate for decreasing congressional capacity may prompt some to argue that increasing the number of permanent congressional staff isn’t necessary. Detailees are seen by some as capacity Band-Aids covering up the more threatening conditions of limited expertise and too few staff in Congress. Increasing committee reliance on their use may perpetuate a situation of inadequate congressional staffing levels.

Agency detailees can be a source of policy expertise for congressional committees, but their contributions can’t be assumed. Detailees, themselves, can be a drain on the already limited capacity of Congress, and ultimately make Congress less effective, less productive and more susceptible to outside influence.

FDA misinterprets massive victory on teen smoking

shutterstock_311764157

As detailed this morning by the Food and Drug Administration, cigarette smoking by U.S. high school students has been cut in half since 2011—from 15.8 percent to 8.0 percent—a remarkable and previously unanticipated public health victory.

Unfortunately, it appears federal authorities may be misattributing the cause. In his announcement earlier today, FDA Commissioner Scott Gottlieb attributes most, if not all, of this reduction in smoking to a federally sponsored program that has only been in place since 2014. Despite substantial evidence in federally sponsored surveys in the United States and abroad showing that remarkable reductions in teen and adult smoking have been concurrent with the increasing popularity of e-cigarettes, the FDA announcement makes no reference to the possibility that much, if not most, of the recent reductions in teen smoking may be attributable to e-cigarettes.

In fact, Gottlieb urges continuing efforts to reduce teen use of all nonpharmaceutical nicotine delivery products, while endorsing expanded efforts at smoking cessation that rely on the pharmaceutical nicotine gums, patches and other products that have proved to be of only marginal effectiveness over the past four decades.

This public health victory is too important to leave to chance and guesswork. If Commissioner Gottlieb has evidence to support the claim that The Real Cost campaign “has already helped prevent nearly 350,000 kids from smoking cigarettes since it launched in 2014,” he should present it to the public. Regulators and public health authorities also should present and discuss the evidence for and against the possibility that the availability of e-cigarettes and related vapor products may, in fact, have played a major role in securing these reductions in smoking.

This is not an academic question. Recently promulgated regulations from Gottlieb’s own FDA threaten to eliminate more than 99 percent of e-cig products from the marketplace before the end of 2018, including all or almost all of the vape-shop component of this industry. The limited data available strongly suggest that the vape-shop products—with their ability to customize devices, flavors and strengths of nicotine to satisfy the preferences of each smoker, and to modify the flavors and strength of nicotine over time to prevent relapse to cigarettes—may be more effective than the mass-market products in achieving and maintaining reductions in smoking in both youth and adults.


Image by Sabphoto

 

Harm reduction is about making better choices, not perfect ones

shutterstock_300454181

Dr. Mark Boom, president and CEO of the Houston Methodist hospital system in Texas, suggests in a recent piece in The Hill that proponents of vaping are simply ignoring evidence that vapor products are not 100 percent safe.

Of course, people in the vaping community do not think that e-cigarettes are 100 percent safe. And if these products were found to increase the incidence of teen smoking of combustible cigarettes, we don’t want that either.

However, Boom appears to misunderstand the philosophy of harm reduction. Boom no doubt would encourage his patients who use intravenous drugs to, at the very least, use clean needles rather than sharing them. If he did not, he would be grossly abusing his privileged position as a healthcare authority. Similarly, applying a harm-reduction philosophy by encouraging smokers to switch to e-cigarettes could save the vast majority of the 480,000 lives taken by combustible cigarettes every year.

As Boom rightly points out, e-cigarettes do, in fact, contain toxins. These are, however, at a very low concentration in the excipients – the products that make up the aerosol suspension that delivers the active ingredient of nicotine. What he neglects to add is that the excipients in nicotine liquid are strikingly similar to those in asthma inhalers. We certainly wouldn't suggest that an asthma patient forgo their medication because they are also inhaling toxins.

As a pharmacologist, I would encourage every person who ingests toxins to stop doing so. Of course I would. But my years in addiction research have made clear that you cannot simply tell someone to not pick up that cigarette, syringe or beer. Until that is possible, we have to encourage people to make better choices – which, unsurprisingly, is very easy to do.

When people do things we don’t approve of, we often write them off as not caring about their own health or personhood. But having worked at community organizations that distribute clean needles to curb transmission of infectious disease, naloxone to reverse overdoses and HIV drugs to prevent new infections, I can attest that people do recognize the risks they take every day and embrace opportunities to reduce the consequences associated with risky behaviors.


Image by Grey Carnation

 

Setting the record straight on copyright modernization

shutterstock_563308027

There’s a lot to be said for the adage that “we shouldn’t let the perfect be the enemy of the good.” While true in many situations, it also requires that there be enough “good” to make the effort worthwhile, and that you not waste energy better deployed doing something else.

In a recent blog post on Truth on the Market, Kristian Stout of the International Center for Law and Economics takes issue with my framing of a bill that would require the register of copyrights—the person who heads the Copyright Office within the Library of Congress—to be a presidential appointment. I should add the proposal comes during a time when President Donald Trump is considerably behind in selecting and confirming his appointees to a broad range of executive branch positions.

Unfortunately, Stout mischaracterizes and misreads my position. In my TechDirt piece, I described both points of view about the bill, writing that “opponents argue the bill will make the register and the Copyright Office more politicized and vulnerable to capture by special interests.” Stout takes this out of context and represents it as my position, rather than a description of what others have said.

There are a number of other issues with Stout’s piece, not all of which are worth addressing. But I will tackle the main ones.

It’s true, as Stout claims, that the idea for making the register a nominated and confirmed position has been under discussion for several years as part of the House Judiciary Committee’s copyright review, but so were a lot of other things that didn’t come to fruition. My point is not that this idea is totally new, but that the impetus for rushing the bill through now is the political dynamic between Congress and Librarian of Congress Carla Hayden, as well as her removal last year of then-Register of Copyrights Maria Pallante. Stout attests Hayden’s nomination was not politicized when, in fact, it was. The Heritage Foundation, among other conservative groups, argued against her confirmation. Heritage Action even urged senators to vote “no” on her nomination, a position with which we disagreed.

To set the record straight — I don’t think it’s a terrible bill. As I’ve argued in TechDirt and The Hill, there are some reasonable arguments in its favor. There are also some plausible arguments against it. I simply don’t think it does much to move the ball either way.

The main point of the bill, according to many of its proponents, would be to make the Copyright Office position more politically accountable. In theory, with congressional input, stakeholders on all sides would have an opportunity to weigh in on who gets confirmed for the position. This could limit edge cases where there is a truly awful candidate. But the Senate rarely, if ever, rejects presidential appointments who are otherwise broadly qualified — particularly for what is not a Cabinet-level position. And there wouldn’t be many groups capable of mounting a successful opposition fight over this position, as they might over a Supreme Court seat (even then, it’s rarely the primary factor). Even for Heritage, likely the most powerful conservative group in Washington, key-vote scoring against Hayden in a Republican-controlled Senate only got them 18 votes.

This, in itself, is not much of a justification for a bill.

One of the key points of Stout’s argument for the legislation is that: “Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization.” But changing who appoints the register shouldn’t be conflated with separation or modernization. Indeed, the librarian of Congress still has final authority over all of the office’s substantive regulatory powers. Changing who picks the register also has nothing to do with meeting the challenges of modernizing the office’s information technology infrastructure. If an independent office is what you want, this bill isn’t that.

For the record, we at R Street are not necessarily opposed to an independent (or relocated) Copyright Office. Some scholars, including former Register Pallante, make a plausible case that the systemic bureaucracies of the Library are part of what’s holding the Copyright Office back. But it’s also hard to separate the Library’s well-documented IT problems from the decadeslong tenure of the previous librarian, James Billington. Additionally, there are IT modernization challenges at every level of the federal government, including independent agencies, and it may be worth giving the new librarian a chance to fix them.

At heart, the location of the Copyright Office is a complex question of public administration that is worthy of deep consideration and review. An immediate step I have suggested in conversations with colleagues is to have Congress ask the National Academy of Public Administration to conduct a review of the internal structural challenges of the Library and its component agencies (as it did for the PTO in 2005). This would inject a much-needed dose of objectivity into a discussion that has unfortunately served as another proxy battle between the entrenched sides of the intellectual property debate.

In his conclusion, Stout makes an excellent point: “Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates.” I agree. Regardless of how strong you think our nation’s copyright laws ought to be, you should be in favor of making the system’s core functions work better. This bill will do little, if anything, to advance that goal. I look forward to working with stakeholders on all sides, including Stout, to find solutions that do.


Image by Jirsak

 

PACE Act would prosecute teen sexting as kiddie porn

shutterstock_512002387

Crimes against children, particularly those that involve sexual exploitation, are beyond the pale. But while society needs to make sure it protects children from sexual abuse, recent legislation passed by the U.S. House could cause more problems than it solves – hurting minors, expanding mandatory minimums and creating redundant federal authority where similar laws already exist at the state level.

By a 368-51 margin, the House voted May 25 to approve H.R. 1761, the Protecting Against Child Exploitation (PACE) Act of 2017. The bill is intended to strengthen federal laws dealing with the production and distribution of child pornography by making the transmission of sexual images of minors a federal crime. The measure has moved on to the upper chamber, where it will be considered by the Senate Judiciary Committee.

While the bill’s purpose is to punish child predators, its unintended consequence will be to create more criminals out of teenagers whose main crime is simply lacking common sense.

As written, the law could apply to minors who send sexual images to other minors, or what is commonly referred to as “sexting.” The House-passed bill provides no exemption or provision to deal with minors who engage in sexting, meaning they could be subject to a mandatory minimum sentence of 15 years in prison and lifetime registration as a sex offender. Because of how broadly the text is written, even a teenager who merely views a sexual image or requests that one be sent could be subject to the mandatory minimum.

Sexting among teenagers increasingly has become the norm. While the phenomenon is worth a larger discussion, most would agree that locking teenagers up for 15 years is not the best way to handle the situation. Few believe these minors are committing crimes on a par with actual child predators. They should not be treated the same way under the law.

Teenagers are still minors in the eyes of the court. By creating an inflexible law that cannot take into account the ages of those involved, Congress would force the courts to punish minors for having poor judgment. For numerous other crimes, the court system is purposely designed differently when it comes to how and whether to prosecute and sentence minors. Judges are given more tools to keep them out of jail and free of criminal records. Retaining local jurisdiction also allows communities to respond more effectively to offenders and victims alike. Child pornography laws should protect children from terrible acts, not punish teenagers for lapses in judgment.

Such concerns could have been addressed in the PACE Act, were it not for pure laziness on the part of the House of Representatives. The bill was passed without any hearings or input from experts, and approved as members fled Washington for their Memorial Day recess. The American people deserve better than that.

There is still hope that the Senate will take notice of these issues. Law enforcement agencies at both the state and federal levels already have multiple tools at their disposal to prosecute child predators. This expansion of federal power is nothing but Congress creating a solution to a problem that does not exist.


Image by nito

 

Dual-class shares and the shareholder empowerment movement

shutterstock_432826024

The shareholder empowerment movement has renewed its effort to eliminate, restrict or, at the very least, discourage use of dual-class share structures—that is, classes of common stock with unequal voting rights—in initial public offerings. Of particular interest to the movement, which is made up primarily of public pension funds and union-related funds that hold more than $3 billion in assets, was the recent Snap Inc. IPO that sold nonvoting stock to the public, a first for IPOs with dual-class shares.

Typically, a company will issue a class of common stock (“ordinary shares”) to the public that carries one vote per share, as Facebook Inc. did in its IPO, while reserving a separate “super-voting” class that provides founders like Mark Zuckerberg with at least 10 votes per share. This structure allows the founders to maintain control of the company without having to own a majority of the outstanding common stock.
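
To make the arithmetic concrete, here is a minimal sketch with hypothetical share counts (not Snap’s or Facebook’s actual figures) showing how a 10-votes-per-share class preserves founder control:

```python
# Hypothetical capitalization: how a "super-voting" class lets founders
# control a company while owning a minority of the shares outstanding.
founder_shares = 150_000_000   # Class B, 10 votes each (assumed)
public_shares = 850_000_000    # Class A, 1 vote each (assumed)

founder_votes = founder_shares * 10
total_votes = founder_votes + public_shares  # public shares carry 1 vote each

economic_stake = founder_shares / (founder_shares + public_shares)
voting_power = founder_votes / total_votes

print(f"Founders own {economic_stake:.0%} of the equity "
      f"but control {voting_power:.0%} of the votes")
# Founders own 15% of the equity but control 64% of the votes
```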

Even though it offered no voting rights in the shares sold to the public, the Snap IPO was a huge success. Snap priced its IPO at $17 per share, giving it a market valuation of roughly $24 billion. The book was more than 10 times oversubscribed, and Snap could have priced the IPO at up to $19 per share.

The Council of Institutional Investors, the trade organization that represents the shareholder empowerment movement, has asked the S&P Dow Jones Indices, MSCI Inc. and FTSE Russell to exclude Snap Inc. and other companies with nonvoting stock from their indices unless they include extremely restrictive provisions, such as maximum sunset provisions—triggers that would terminate the super-voting characteristics of the founders’ shares—of three to five years. Moreover, consistent with the CII’s general policy, the letters the council sent also advocate for a forced conversion of all dual-class share structures to one-share, one-vote, unless the majority of ordinary shares vote to extend the dual-class structures for a maximum of five years.

The movement’s advocacy is not confined to those IPOs with dual-class shares listed on the U.S. stock exchanges. It also is attempting to persuade the Singapore stock exchange not to allow dual-class share structures of any kind.

If the movement is successful, this shift would not be trivial, as many of our most valuable and dynamic companies have gone public by offering shares with unequal voting rights. Besides Snap and Facebook, other companies that have gone public with dual-class shares include Alphabet Inc. (Google); LinkedIn (acquired by Microsoft for $26 billion in 2016); Comcast; Zoetis Inc.; Nike, Inc.; and Alibaba Group Holding Ltd. Two of these companies, Alphabet and Facebook, rank in the top 10 in the world based on market valuation. Berkshire Hathaway Inc., which likewise uses a dual-class share structure, also ranks in the top 10, although it only adopted the structure after Warren Buffett bought control of the company.

Public companies with dual-class share structures have an aggregate market value of close to $4 trillion. As reflected in their market valuations, they are some of our most important companies, helping to fuel the growth of the economy.

The movement’s vigorous response to Snap’s hugely successful IPO was unsurprising. The CII, since its founding in 1985, has promoted a “one-share, one-vote” policy as one of its bedrock principles. But this policy of “shareholder democracy” should not be confused with political democracy, where each person gets one vote. In shareholder democracy, voting power is assigned according to property ownership – i.e., how many shares the person or entity owns. Dual-class share structures clearly violate the CII’s policy of shareholder democracy and are an obvious threat to the movement’s power. That is, the more public companies that utilize a dual-class share structure, the more controlled companies exist and the less power the movement has.

Most importantly, the movement’s advocacy comes into strong conflict with what many believe to be the great strength of our system of corporate governance: the private ordering of corporate governance arrangements, with dual-class share structures being an optimal result of that ordering. Consistent with this understanding, NASDAQ Inc. recently declared:

One of America’s greatest strengths is that we are a magnet for entrepreneurship and innovation. Central to cultivating this strength is establishing multiple paths entrepreneurs can take to public markets. Each publicly-traded company should have flexibility to determine a class structure that is most appropriate and beneficial for them, so long as this structure is transparent and disclosed up front so that investors have complete visibility into the company. Dual class structures allow investors to invest side-by-side with innovators and high growth companies, enjoying the financial benefits of these companies’ success.

At its core, the shareholder empowerment movement advocates shifting corporate decision-making authority to shareholders, and thus away from boards of directors and executive management, the most informed loci of corporate authority. Shareholder empowerment, not maximizing shareholder wealth, is the movement’s objective. This movement must be stopped from opportunistically interfering with the use of dual-class share structures in IPOs.


Image by create jobs 51

 

Lehmann before the House Financial Services Committee

R Street Senior Fellow R.J. Lehmann testifies before a June 7 hearing of the House Financial Services Committee on “Flood Insurance Reform: A Taxpayer’s Perspective.”

How Congress became colonized by the imperial presidency

shutterstock_558924988

Ever since Arthur Schlesinger’s 1973 book coined the phrase, the so-called “imperial presidency” has been a perennial topic of our national political discourse. At a time when the American branches of government are separate but unequal, the seven essays collected in The Imperial Presidency and the Constitution trace when fears of an imperial presidency first arose, the extent to which such fears are justified and what can be done about it.

Adam J. White’s contribution, “The Administrative State and the Imperial Presidency,” cautions not to conflate the “imperial presidency” with the administrative state itself. As White points out, the administrative state is “first and foremost a creation of Congress,” and “to at least some extent, a necessary creation.”

By contrast, the imperial presidency refers to the power the president wields through his office. While this power can be channeled and enhanced through the apparatus of the administrative state, an imperial presidency also “can restrain the administrative state, as in the Reagan administration … and, less obviously, the administrative state can restrain an imperial president.”

In modern times, of course, the power of the presidency and the administrative state have grown in tandem. “The president wields executive power broadly to expand the administrative state, and the administrative state acts in service of the current president’s agenda,” White writes.

After various failed attempts by Congress itself to act as an administrative body during the Articles of Confederation era, the U.S. Constitution provided for an energetic executive, which Alexander Hamilton described as “essential to the steady administration of the laws.” Despite this, the Constitution offered little in the way of an affirmative vision of the administrative bureaucracy, an omission some scholars have referred to as “the hole in the Constitution.”

Although there were earlier antecedents, Congress’ creation of the Interstate Commerce Commission in 1887 marked the modern administrative state’s arrival. Over time, the ICC’s powers were enhanced by Congress to encompass both judicial and legislative functions, given its ability both to set rates and to adjudicate disputes. During the Progressive Era and through the New Deal, more administrative agencies were built on the ICC model, including the Federal Trade Commission and Federal Communications Commission.

Importantly, these agencies were distinct from the traditional executive branch departments and thus operated “outside of the direct oversight of the president,” White notes. Progressive policymakers—starting with some in the Franklin Roosevelt administration—quickly grew frustrated with the agencies’ ability to “impede an energetic liberal president’s regulatory agenda.”

Years later, conservatives also began to bemoan the independent nature of certain agencies. As the Reagan administration sought to cut back on the regulatory state, it attempted to increase the president’s power over the administrative state through mechanisms such as centralized regulatory review under the Office of Information and Regulatory Affairs. Since Reagan, presidents of both parties increasingly have embraced greater presidential control over federal agencies. Some used that control to expand the administrative state’s power, while others have sought to curtail it.

The “most straightforward” way to shrink the administrative state, White argues, “would be for Congress to do the work of taking delegated powers away from the agencies, by amending statutes.” Since many legislators prefer to delegate their power in an effort to avoid responsibility, White views this option as unrealistic.

This leads White to the “second best option,” which is to pass some form of broad regulatory reform legislation that revamps the processes through which agencies enact rules. He mentions the REINS Act and the Regulatory Accountability Act as two possible options. R Street actually has identified a whole menu of options from which Congress feasibly could choose.

More broadly, White points out that using the imperial presidency as a means to control and direct the administrative state is no longer an effective mechanism to rein it in. Rather, it’s far past time that the other branches assert themselves and join the fray. One possibility is for the judicial branch to revisit its doctrines that grant significant deference to federal agencies.

In many ways, Andrew Rudalevige’s contribution, “Constitutional Structure, Political History and the Invisible Congress,” picks up where White’s essay leaves off. When the system of separated powers works as intended, the legislative and executive branches operate as “rivals for power,” making their relationship contentious, rather than cooperative. Although the Founding Fathers were more concerned about the legislature accreting power than the executive, Rudalevige’s chapter retraces how both structural and political factors have created the exact opposite dynamic.

Rudalevige lays out an obvious—but often underappreciated—truth: the president has a built-in advantage in that he is just a single person. By contrast, Congress must function as a 535-member conglomeration of legislators spread across two different chambers and hailing from different political parties and geographical regions. Given that each member carries “their own localized electoral incentives,” they will “rarely find it in their interests to work together, much less to confront the executive branch.”

Another factor Rudalevige pinpoints for Congress’ decline is the rise of political polarization. Politics has increasingly become a team sport: “A vote against presidential overreach is now seen by the president’s party colleagues as damaging to the party brand, and thus to their own careers.” The result is that legislators are more likely to toe the party line in pursuit of short-term policy victories, rather than vote to strengthen Congress as an institution.

Rudalevige also highlights how modern travel has allowed members of Congress to shuttle back and forth between their home districts and Washington with relative ease. This has led to the rise of the “Tuesday-Thursday club of drop-in legislators,” who spend more time pressing the flesh with donors and constituents back home than doing the hard work of hammering out legislative compromises. One option is for Congress to extend its work weeks, which could increase the amount of floor time available to conduct legislative business.

Exercising more effective oversight doesn’t just mean finding more time; it also requires more capacity. Rudalevige cites R Street’s Kevin Kosar, who has chronicled the decline in congressional staff and pay levels over the past 40 years. Beefing up congressional staff, as well as support systems like the Congressional Research Service, would help address this deficiency.

Other possibilities include forming new institutions such as a Congressional Regulation Office—as proposed by Kosar and the Brookings Institution’s Phillip Wallach—to provide independent cost-benefit analyses and retrospective reviews of regulations. A final idea—and one long advocated by policy wonks—is a return to “regular order” budgeting, in which Congress breaks the federal budget into bite-sized pieces rather than relying on last-second, thousand-page omnibus spending bills to keep the government’s lights on.

While all of these ideas are available and ready for the picking, Rudalevige admits that “current returns are unpromising” that Congress will actually implement any of them. Nonetheless, he’s correct in warning that “the matter demands our attention even so.” Let’s hope Congress—and the American citizenry—heeds his call.


Image by Ed-Ni Photo

 

How executive ‘detailees’ could help ease Congress’ staffing problems

Capitol Building

It is becoming more widely acknowledged that Congress has a staffing problem. While the executive branch employs more than 4 million people, the legislative branch has only about 30,000. This number includes personnel toiling for agencies that do not readily come to mind as legislative, like the Government Publishing Office, the Architect of the Capitol and the U.S. Capitol Police.

While congressional capacity advocates shout for more funding and personnel to be allocated to the legislative branch, political scientists Russell Mills and Jennifer Selin examine the use of an often-overlooked stream of expertise available to congressional committees: federal agency detailees. Detailees are executive agency personnel with a particular policy mastery who are temporarily loaned out to congressional committees. The typical detailee assignment runs one year.

Hill operators and observers have long known policy expertise resides primarily in congressional committee staff. Compared to House and Senate personal office aides, committee staffers typically have more experience and narrower portfolios, both of which enhance the abilities of committees and their members to conduct oversight, draft legislation and develop fruitful lines of communication with relevant agency stakeholders.

However, as Mills and Selin point out in a recent piece in Legislative Studies Quarterly, there are only about half as many committee staff as there were in 1980, while inflation-adjusted pay levels have fallen 20 percent for many committee aides. This reduction in resources has hampered committees’ oversight capabilities, in addition to abetting the centralization of policymaking in leadership offices or its complete delegation to the executive branch.

House versus Senate committee staff, 1977-2014

house v senate staff

SOURCE: Russell Mills and Jennifer Selin, 2017

Mills and Selin argue detailees offer at least three specific benefits to supplement Congress’ legislative and oversight responsibilities:

  1. Detailees provide additional legislative support. Though committee staffers are usually issue specialists, “detailees often have specialized, expert knowledge of a policy, [and] they are able to provide awareness more traditional congressional staff may not have.” Moreover, given their personal experience within the agencies, detailees offer committees important insight into the decision-making processes and likely agency responses to potential congressional action.
  2. Detailees assist with executive branch oversight. “The process for securing information through requests directly to a federal agency is slower and involves agency coordination with the presidential administration. Detailees provide a way around these problems.” Simply having agency contacts and being able to connect committee staffers directly to those agency personnel most likely to respond quickly with accurate information can expedite the frustratingly slow information-gathering process vital to conducting effective congressional oversight.
  3. Detailees supplement interest-group engagement. In developing policy, committee staffers spend much of their time meeting with relevant policy stakeholders. “Committee staff routinely assists members of Congress by meeting with interest groups to gather their input for legislative initiatives as well as to hear their objections or support for actions taken by executive agencies.” Detailees provide the committee more, and different, stakeholder contacts established from the agency perspective, which allows for better information filtering and a more informed assessment of legislative potential.

Finally, and importantly, Mills and Selin point out that use of detailees is a rare win-win for both the legislative and the executive branches. The benefits to Congress are clear: committees gain expert-level staffers with experience and connections to the agencies under the committee’s purview, all on the agencies’ dime. Sen. Susan Collins, R-Maine, has noted:

These detailees apply their expertise in researching issues, staffing hearings, and working on legislation. In return, they gain valuable experience, which develops their careers and benefits their agencies.

The gains for the executive branch are less intuitive. After all, the agency loses a competent staffer who then offers Congress firsthand insight into agency operations, even potentially providing increased oversight to the very agency from which the staffer originated.

But Mills and Selin note that, from qualitative interviews they conducted with current and former detailees, they discovered that “detailees gain experience in the legislative process, can represent the interests and perspectives of the agency, and give the agency a conduit to committee decision making.”

In other words, just as detailees provide insider information to committees on agency operations, agencies profit from their detailees returning to the agency with intelligence on committee decision-making, policymaking and oversight capabilities. All of which our personnel-strapped national legislature badly needs.

Five years of R Street

shutterstock_553557937

Five years ago today, Deborah Bailin, Christian Cámara, Julie Drenner, R.J. Lehmann, Alan Smith and I resigned our jobs at the Heartland Institute over a horrifically ill-advised billboard advertisement and began a new think tank called R Street. Tonight, we’ll celebrate our fifth anniversary.

We’re now almost 40 strong and have a budget about 10 times that of our first year. In honor of our anniversary, here are five bits of trivia about R Street that I like to share:

  1. R Street’s first hire was Erica Schoder, now our senior vice president for operations. Our first office, previously the Heartland Institute’s Washington office, was a converted art gallery above a vintage clothing store.
  2. Some other names we considered were the Metis Institute (after the Greek goddess of common sense) and JuneFirst (after the day we officially opened). Our offices were near R Street and R is the first mostly residential street off Connecticut Avenue, which is arguably the main street in Washington. So it’s the place where real life begins in the nation’s capital.
  3. One huge advantage of the name R Street was that we could get the short URL rstreet.org. That’s actually a big deal. It makes our email addresses much easier to type. Many other think tanks that have started recently have long and unwieldy URLs. We don’t.
  4. To my knowledge, we remain the only right-of-center think tank that both reimburses bike sharing and maintains a gender-identity nondiscrimination policy. I’m a cyclist and support civil rights protections for the gender nonconforming. But I’d argue that both policies are simply grounded in common sense.
  5. We believe that pirates are much cooler than ninjas. By a lot.

Image by Africa Studio

 

Reports of the taxi industry’s death have been greatly exaggerated

shutterstock_403179772

Co-written with Jonathan Haggerty 

It seems like nearly every time ridesharing is brought up in New York City, someone will inevitably bring up the dramatic decline in taxi medallion prices. Dubbed the “Uber effect” by American Enterprise Institute scholar Mark Perry, the theory is that increased competition from companies like Uber and Lyft has eroded the legal monopoly that taxi medallion holders previously exerted in the on-demand automobile transport market.

By competing against this once isolated market, transportation network companies like Uber and Lyft have made these medallions significantly less valuable. One proxy for this decline can be found in share prices of Medallion Financial Corp., a publicly traded consumer and commercial lending firm that is a major creditor in the taxi medallion lending business. When looking at the period from 2013 to 2016, the decline certainly looks precipitous:

caleb1

This may not be the complete story, however. After all, the stock price may vary depending on the specific quality of the loans the company issues, its underlying cost of capital and general market confidence. Furthermore, the stock price doesn’t distinguish among the numerous categories of medallion ownership.

To the extent that news reports cite changes in the actual market value of a medallion, they usually do so anecdotally, comparing the peak value in 2014 of more than $1 million to the current trough of under $300,000.

Given the clamor and potential policy implications, a more detailed analysis seemed appropriate. We examined medallion price trends over time and differentiated across the different medallion categories. NYC’s Taxi and Limousine Commission compiles monthly records of medallion transactions for each of six categories: individual unrestricted, handicap accessible and fuel alternative, as well as corporate (minifleet) unrestricted, accessible and fuel alternative. Unrestricted cabs are the general-purpose yellow taxis everyone thinks of, handicap accessible cabs are specially retrofitted to give persons with disabilities easier access, and fuel alternative cabs have specific fuel requirements tilted toward being more environmentally friendly.

The primary breakdown is between individual and minifleet. Where an individual medallion owner has to spend a minimum number of hours per year (usually the equivalent of 200 separate nine-hour shifts) driving the cab, a minifleet owner can lease out taxis to other drivers.
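
For readers who want to replicate the exercise, here is a minimal sketch of the kind of aggregation we ran, assuming the TLC’s monthly transfer records have been exported to a CSV; the file name and column names are hypothetical stand-ins for the actual schema:

```python
import pandas as pd

# Hypothetical file and column names; the TLC publishes monthly medallion
# transfer records, but the exact schema may differ.
transfers = pd.read_csv("medallion_transfers.csv",
                        parse_dates=["transfer_date"])

# Average sale price and sale count per month within each of the six
# license categories (individual/minifleet x unrestricted/accessible/alt-fuel).
monthly = (transfers
           .assign(month=transfers["transfer_date"].dt.to_period("M"))
           .groupby(["category", "month"])["price"]
           .agg(avg_price="mean", sales="count")
           .reset_index())

# Thin months (one or two sales) make the averages noisy, so it helps to
# plot the sale count alongside the average price.
print(monthly.tail())
```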

By far, the largest categories are the individual and minifleet unrestricted licenses, and the general decline here tracks fairly well with Medallion Financial’s stock price:

caleb2

Immediately we can see that there is a clear and substantial price premium for minifleet licenses over individual licenses. This makes sense intuitively: a license with strict personal driving requirements is going to be more restrictive on your time, and less valuable, than one without. Another factor that stands out is how messy the data is, with transfers at price points both significantly cheaper and significantly more expensive than the average in any given month. Unfortunately, it’s difficult to tell whether these outliers reflect an issue with the TLC’s data recording or external factors, like family transfer discounts or business liquidations.

However, it is important to recognize that the towering price high in 2014 was spurred partially by fleet owners borrowing against the rising value of the medallions they already owned to finance further purchases. So while medallion prices are undoubtedly dropping, it may look worse because prices were experiencing a bit of a bubble in the first place. Indeed, a former head of the TLC stated in April “the (taxi) industry’s performance has not been as bad as the decline in medallion prices would suggest.” In other words, don’t mistake the price of medallions for the health of the industry overall.

Another obvious factor here is the decrease in liquidity since 2014. One sale in March and two in February of 2017 mean one of two things is happening: either medallion owners can’t find buyers, or owners are holding on because they see a price rise or stabilization on the horizon. The prospect of a bailout could keep buoying prices, while easing restrictions on medallion transfers has increased the potential pool of buyers.

Unfortunately, there were so few alternative fuel licenses released or transferred that there was not much data to analyze. Handicap accessible licenses, however, had a more interesting story to tell:

caleb3

Here you can see that the handicap accessible licenses have actually appreciated in value over the same timeframe. (If the graph looks funky with the straight lines, that’s due to the initial auctions where these licenses were sold.) This is not an apples-to-apples comparison, because we have so little data post-2014, but the total lack of sales (for minifleet accessible) may be an indication that it’s not an asset worth liquidating.

One reason for this may be that Uber partners with cab drivers who own these handicap accessible licenses to help provide rides on their platform to users with disabilities. It seems intuitive, then, that these specific medallions would continue to hold value.

But perhaps the most important factor in all this is the total size of the market. The market share of taxis has shrunk with the emergence of Uber and Lyft, but the overall size of the market is larger today:

caleb4

Note that taxi trip volume has begun to level out in late 2016 and 2017. Taxis can coexist with TNCs in some markets, especially in densely populated cities where the value of a street hail is higher.

Put all of this together, and it appears the reports of taxi death have been greatly exaggerated. While some form of the Uber effect certainly exists, insofar as general medallion prices are concerned, the decline is not quite as precipitous as some have reported and taxi ride volume is not disappearing overnight. Furthermore, the future price of all these medallions likely will be more dependent on the success or failure of autonomous vehicles than on competition from ridesharing services from here on out.

 

 

The data we compiled for the piece can be found here.


Image by Cameris

Even without Durbin Amendment repeal, Congress should pass the CHOICE Act

shutterstock_300056177

The following post was co-authored by R Street Outreach Manager Clark Packard.


House Financial Services Committee Chairman Jeb Hensarling, R-Texas, has done the yeoman’s work of putting together a host of fundamental conservative reforms in the CHOICE Act. Although repeal of the Durbin amendment would have been a positive, pro-market reform, Congress should pass the bill even if this repeal is not included.

The most important provision of the bill allows banks the very sensible choice of maintaining substantial equity capital in exchange for a reduction in onerous and intrusive regulation. This puts before banks a reasonable and fundamental trade-off: more capital, less intrusive regulation. That alone is reason enough to support the CHOICE Act. Its numerous other reforms, including improved constitutional governance of administrative agencies, are a further reason to back the bill.

Accountability of banks

The 10 percent tangible leverage capital ratio, conservatively calculated, as proposed in the CHOICE Act, is a fair and workable level.

A key lesson of the housing bubble was that mortgage loans made with 0 percent skin in the game are much more likely to cause trouble. To be fully accountable for the credit risk of its loans, a bank can keep them on its own balance sheet. This is 100 percent skin in the game. The CHOICE Act rightly gives relief to banks holding mortgage loans in portfolio from regulations that try to address problems of a zero skin in the game model – problems irrelevant to the incentives of the portfolio lender.
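
As a simplified, hypothetical illustration of that trade-off (the bill’s actual definitions of tangible equity and leverage exposure are more detailed than this sketch):

```python
# Simplified sketch of the CHOICE Act's qualifying test: tangible equity
# as a share of leverage exposure, against a 10 percent threshold.
# The balance-sheet figures below are hypothetical.
QUALIFYING_RATIO = 0.10

def tangible_leverage_ratio(tangible_equity: float,
                            leverage_exposure: float) -> float:
    return tangible_equity / leverage_exposure

ratio = tangible_leverage_ratio(tangible_equity=22e9,     # $22B equity
                                leverage_exposure=200e9)  # $200B exposure
print(f"Leverage ratio: {ratio:.1%}")                            # 11.0%
print("Eligible for regulatory relief:", ratio >= QUALIFYING_RATIO)  # True
```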

Accountability of regulatory agencies

The CHOICE Act is Congress asserting itself to clarify that regulatory agencies are derivative bodies accountable to the legislative branch. They cannot be sovereign fiefdoms, not even the dictatorship of the Consumer Financial Protection Bureau. The most classic and still most important power of the legislature is the power of the purse. The CHOICE Act accordingly puts all the financial regulatory agencies under the democratic discipline of congressional appropriations. This notably would end the anti-constitutional direct draw on public funds that was granted to the CFPB precisely to evade the democratic power of the purse.

The CHOICE Act also requires of all financial regulatory agencies the core discipline of cost-benefit analysis. Overall, this represents very significant progress in the governance of the administrative state and brings it under better constitutional control.

Accountability of the Federal Reserve

The CHOICE Act includes the text of The Fed Oversight Reform and Modernization Act, which improves governance of the Federal Reserve by Congress. As a former president of the New York Federal Reserve Bank once testified to the House Committee on Banking and Currency: “Obviously, the Congress which set us up has the authority and should review our actions at any time they want to, and in any way they want to.” That is entirely correct. Under the CHOICE Act, such reviews would happen at least quarterly. These reviews should include having the Fed quantify and discuss the effects of its monetary policies on savings and savers.

Reform for community banks

A good summary of the results of the Dodd-Frank Act is supplied by the Independent Community Bankers of America’s “Community Bank Agenda for Economic Growth.” “Community banks,” it states, “need relief from suffocating regulatory mandates. The exponential growth of these mandates affects nearly every aspect of community banking. The very nature of the industry is shifting away from community investment and community building to paperwork, compliance and examination,” and “the new Congress has a unique opportunity to simplify, streamline and restructure.”

So it does. The House of Representatives should pass the CHOICE Act.


Image by Erce

 

How congressional power became separate, but unequal

shutterstock_517337593

Recent polling shows that Americans are increasingly turned off by the rancor and high-stakes nature of our recent presidential elections. But don’t expect contests for the presidency to calm down anytime soon. Today, the modern American presidency is more powerful than ever, making the importance of the office paramount to partisans on both sides of the political aisle.

It’s important to remember, however, that the presidency wasn’t always viewed this way. The system established by our Founding Fathers went to great lengths to separate the powers of government both vertically and horizontally. If anything, the founders actually were more concerned about power accreting in the legislature than in the executive.

As James Madison warned: “[i]n republican government, the legislative authority necessarily predominates,” rendering it necessary to take certain “precautions” to “guard against dangerous encroachments.” In contrast, he noted that the “weakness of the executive … might require it to be fortified” in order to resist legislative power grabs. The text of the Constitution reflected the primacy of Congress, too: Article I of the document, which lays out the legislative powers, is more than twice as long as Article II, which describes the executive’s role.

Over the past several decades, though, Congress has gradually lost its influential role, while the presidency has been ascendant. Today, the executive branch is a sprawling behemoth with more than 4 million employees, and presidents routinely advance policy goals by executive fiat rather than by working with Congress. Given Congress’ diminished state, it is important to consider how and why Congress has failed to maintain its role as the country’s “first branch.” A recent paper by Matthew Glassman of the Congressional Research Service lays out a primer on the history of the separation of powers, as well as providing clues about Congress’ dwindling status within that system.

As Glassman recounts, the notion that governmental power comprises distinct functions—lawmaking, administration and adjudication—can be traced back to the ancients, including greats like Aristotle, Polybius and Cicero. The theory was developed more fully in the 17th and 18th centuries by Locke and Montesquieu, who acted as intellectual guideposts to the American founders.

The key feature of the American tripartite system is that it placed the legislative, executive and judicial powers of government into distinct spheres, but also ensured that their powers overlapped in certain areas. For example, the president has veto power over congressionally passed legislation, while Congress has a say in executive branch appointments. In Glassman’s words, this setup produces conflict “by design,” allowing each branch to guard its power against encroachment from the other branches.

Glassman also identifies several institutional features that have allowed our system of separated powers to remain effective throughout most of our country’s history, such as distinct personnel, independent electoral bases and separate resources for each branch. But a separation-of-powers system can guard against the accumulation of power only if the branches operate in relative equipoise.

Glassman’s paper is particularly insightful in analyzing why the power of different branches can ebb and flow over time. He highlights the perverse incentives individual actors within each branch face—incentives that can cause them to undermine their own branch’s long-term institutional power. These forces at least partly explain why Congress’ power has declined in recent times.

For one, Glassman notes that an individual actor within a branch may have personal policy positions that conflict with the long-term institutional interests of his or her branch. An example might be a member of Congress agreeing on policy grounds with a president’s decision to engage in a unilateral military strike, despite the fact that the president acted without consulting Congress.

Partisan affiliations also might cause individuals to take actions that undermine their branch’s institutional power. This phenomenon is commonly seen when members of Congress refuse to criticize a president of their own party publicly, even if they believe the president is acting beyond his power. The electoral goals and strategies of individual members of Congress can conflict with their own branch’s long-term interests.

Glassman recognizes that the problem of a branch’s institutional power conflicting with the personal goals of individual branch members is “particularly acute for Congress”:

As individual members of a large body, Representatives and Senators may not believe they have the responsibility or the capacity to defend the institution… Even when Congress does choose to institutionally defend itself, it often finds itself speaking with less than a unified voice, as only the most vital institutional powers have the ability to unanimously unify Congress.

These problems of collective action—the responsibility/capacity to defend the institution, the ability to speak with a unified voice, and the conflict with party or policy goals—rarely if ever occur in the executive branch. The unitary nature of the presidency ensures that the executive branch will ultimately always speak with one voice, and past presidents have often expressed—both in office and after retirement—a deep feeling of responsibility for the maintenance of the powers of the presidency.

These trends, of course, are not irreversible. Congress can fight back against executive branch encroachment, if it so chooses.

R Street’s Legislative Branch Capacity Working Group has identified numerous “Madisonian solutions” that would allow Congress to rebalance the separation of powers. Options include strategies to strengthen Congress itself—for example, by beefing up committee staffs and providing more funding for entities like the Congressional Research Service and the Government Accountability Office. Alternatively, Congress could seek to reduce the power of the presidency by clawing back power from federal agencies through comprehensive regulatory reform legislation.

In other words, Congress has the tools at its disposal to return our branches of government to a more equal footing. Members of Congress simply need to start prioritizing their branch’s long-term institutional interests over their personal preferences and predilections. Until that happens, we can expect the preeminence of the presidency—and the vitriol of presidential elections—to continue unabated.


Image by Pozdeyev Vitaly

Does Congress have the capacity it needs to conduct oversight?

LegBranch_5.23.17

Envisioned by the founders as the “first” branch of government, Congress has the responsibility of overseeing and managing the other two arms of our constitutional system. And yet, as the executive branch has grown in power and prestige, Congress has increasingly lost its authority.

What resources does Congress currently employ when overseeing federal agencies? Which current resources are well-used; which are under-utilized? What additional tools and resources does Congress need to engage in truly effective oversight? The Legislative Branch Capacity Working Group recently hosted a panel on these questions, moderated by R Street’s Kevin Kosar and featuring Morton Rosenberg of The Constitution Project and Justin Rood of the Project on Government Oversight. Video of the panel is embedded below:

To keep jobs in Missouri, special session should allow more options for renewable energy

missouri_capitol

As the legislature continues work during the special session, it needs to keep sight of the big picture. The case that motivated Gov. Eric Greitens to call the session—the loss of two plants in southeast Missouri due to high electricity costs—highlights the importance of cheap, reliable electricity to the economic health of the state. But if Missouri politicians are interested in sustainable growth in its energy sector, they need to go beyond legislating single cases and take a broader look at how the electrical system can become more attractive to employers and consumers alike.

Of course, there is an easy way to reform Missouri’s electrical regulations that will increase the state’s attractiveness to business while advancing the free market principles that the legislature—and voters—support.

As things stand, consumers are restricted to buying power from their utility company, local municipality or electric cooperative. This lack of choice can be burdensome, but it is a particular problem for businesses with internal sustainability goals regarding energy use. Indeed, many large companies have pledged to receive a certain percentage of their energy from renewable sources. Businesses adopt these goals to save on costs, satisfy consumer preferences and underscore good corporate stewardship. In Missouri, however, many companies may not be able to meet their energy goals, because local utilities simply do not offer sufficient renewable electricity. For a business deciding whether to locate or expand facilities in the state, the lack of options makes the choice clear.

During the recently ended regular session, the legislature considered the Missouri Energy Freedom Act, by Rep. Bill Kidd, which would have solved the scarcity problem by allowing companies to purchase renewable electricity through power-purchase agreements (PPAs) with someone other than their official local provider. This legal structure has worked in other parts of the country and has the potential to attract thousands of jobs to the state, both from energy-conscious employers and from potential renewable generators. Companies save money on their energy bills, and it is they, not taxpayers, who shoulder the risk of new clean energy projects. That means one simple rule change can bring Missouri huge new investments, more profitable businesses, jobs in the community and clean energy to fuel the economy – all at no risk to taxpayers.

Most important, however, this approach would bring more jobs to the state without increasing the role of government. Allowing PPAs involves no mandates, subsidies or government heavy-handedness. It simply provides companies with another option. The proposal also requires utilities to be reimbursed for any costs associated with allowing other power generators access to the grid, essentially leveling the playing field.

Seven of Missouri’s largest companies – General Mills, General Motors, Nestle, Procter & Gamble, Target, Unilever and Wal-Mart – are on record supporting this approach. Even the Department of Defense is supportive. But, it’s not just the big guys who stand to benefit. As long as you use enough power, you’d be able to lock in long-term, low prices for electricity through this new structure—a benefit small mom and pop firms will appreciate.

By allowing PPAs for renewable energy, Missouri can help keep tens of thousands of jobs in the state by opening up greater access to clean energy and increasing competition and free markets. Adding this element of competition should be part of the final legislative package for the special session.


Image by Gino Santa Maria

Nebraska should be on the cutting edge of spacesharing

shutterstock_486607120

Nebraska’s fame as a place for innovation and leadership is legendary. Even before Warren Buffett became the “Oracle of Omaha,” Nebraskans had invented CliffsNotes, the Reuben sandwich, Vise-Grip locking pliers, TV dinners, Kool-Aid and Arbor Day, just to name a few.

But that history makes it even more perplexing why this state, which so often has been on the entrepreneurial leading-edge, suddenly would turn against that heritage and ban useful modern innovations like Airbnb and other short-term-rental platforms that help travelers visit Nebraska’s great and historic cities. These services have helped fill the market gap for people coming to the College World Series and other exceptional crowd-drawing events, who sometimes have trouble finding a place to stay overnight within commuting distance.

The Unicameral, itself a unique feature of Nebraska innovation, has been considering legislation that would prevent local governments from outlawing short-term rentals by those who wish to make a little extra cash and perhaps meet some nice folks from out-of-state. Alas, it is having trouble finding a spot on the agenda.

Like similar bills around the country, the Nebraska bill would continue to allow local governments to prohibit sexually oriented businesses, unlicensed liquor sales, sales of street drugs and anything else in a short-term rental that would constitute a genuine public-safety hazard. Municipalities also could still regulate for noise, animal control, fire and building codes, nuisances and the like.

The crux of the resistance to statewide regulation seems to be that hotels, motels, resorts, inns, licensed bed and breakfasts and other clearly commercial operations are just flat-out opposed to what they view as additional competition unburdened by many of the fees and requirements of commercial hospitality. One of the compromises suggested is that the legislation be amended to require short-term-rental customers to pay the applicable hotel tax when they book, which companies like Airbnb already collect in many other communities.

Indeed, there are lessons for Nebraska in how these conflicts already have been resolved elsewhere around the country. New York City recently settled a lawsuit that challenged its statute setting fines of up to $7,500 for hosts who illegally list a property on a short-term-rental platform, with the platforms concerned that the vague language could leave them on the hook.

A more recent settlement with the City of San Francisco could set a pattern for future legislation, in that two of the major short-term-rental platforms agreed to a registration process with the city, allowing hosts to know the requirements and giving them confidence that they are operating legally. Processes in Denver and New Orleans similarly work to pass host registrations through to the local governments.

There can be reasonable regulations to protect neighborhoods and public safety that stop well short of prohibition. Lawmakers and regulators should craft targeted rules that allow opportunities for people with a room to spare to match with tourists who can take advantage of an overnight stay. Nebraska—a reservoir of both good sense and an innovation ethic—has the chance to be a great model for other states with a well-crafted new law.


Image by paulrommer

 

WannaCry underscores a need for cyber hygiene and insurance

“Oops, your important files are encrypted” read the pop-up message on hundreds of thousands of Windows operating systems across the world. The ransomware cyberattack, infamously labeled “WannaCry,” paralyzed computers by encrypting their data and holding it ransom pending payments from the afflicted.

In the days following, headlines bemoaned the arrival of the long-feared “ransomware meltdown,” while critics jumped to blame Microsoft for product insecurities and condemned the National Security Agency for stockpiling vulnerabilities. While it’s easy to assign blame and stoke fear, policymakers should, instead, use the attack as an opportunity to encourage better cybersecurity behavior and sensible risk management practices – including cyber insurance.

Cyber insurance was first touted during the dot-com boom of the early 2000s, but has only recently grown in popularity. Like other types of insurance, cyber insurance offers financial protection from sudden and unexpected losses.

For instance, in addition to coverage for WannaCry-like ransom attacks, many policies now encompass a wide range of breach-related costs businesses may face, including regulatory fines, legal costs, public relations services and costs associated with internet downtime. Because cyberattacks can result in all sorts of unexpectedly large expenses, coverage designed to insulate a business from the financial shock of a cyberattack is vital.

In the case of WannaCry, the total illicit haul of the ransom is projected to be less than $100,000. Yet, downstream damages are expected to tally in the billions. In fact, one firm projects that up to $8 billion in global computer downtime costs may accrue to victims ranging from hospitals and government agencies to car companies.

The consequences of that damage may, for some, be ruinous. According to Symantec, ransomware attacks increased 36 percent from 2015 to 2016, while the average ransom increased 266 percent in that time, to $1,077.

With the number of attacks on the rise, it is important to note that cyber insurance can both facilitate resilience and assist in maintaining system security. That’s because the underwriting process, during which the insurer assesses the risk it considers taking on, often requires a cyber risk assessment. Once a policy is written, specific policy terms often require adherence to basic security practices such as patching or regular network assessments. Companies that do not meet a threshold of cyber preparedness may not be eligible for coverage, may face higher premiums and could risk losing their coverage entirely. Put another way, cyber insurance coverage contributes to a culture of preparedness.

Cyber insurance take-up rates are growing, but the market is still evolving and penetration is uneven. According to a recent survey by Aon, only 33 percent of companies worldwide had cyber insurance coverage. Companies abroad are at a particular disadvantage when it comes to recovery, as they hold less than 10 percent of all cyber insurance policies.

This is particularly worrying because WannaCry revealed a geographic gap in cyber preparedness. Russia and China saw the largest incidence of infected computers, suggesting that lax patching practices and overreliance on pirated or outdated systems are more common abroad. Those companies without coverage today face the full brunt of the costs associated with the WannaCry attack.

Though the domestic cyber insurance picture is better, more should be done to encourage coverage. For instance, while the White House’s recent cybersecurity executive order reiterated that cybersecurity is a priority area for the Trump administration, it was silent on the role cyber insurance can play in incentivizing agencies and their contractors to internalize cyber preparedness. This is a missed opportunity. The government can use the power of the purse to promote cyber insurance adoption in the market as a whole by requiring federal contractors to acquire certain types of cyber risk coverage.

High-profile cyberattacks like WannaCry highlight the need for cyber preparedness and cyber insurance. A policy approach that emphasizes both—and cyber insurance in particular as a market solution to the global ransomware problem—will be a boon for companies and consumers alike.

UPDATE (May 30, 2017): This piece originally cited a statistic attributed to the National Cyber Security Alliance that the alliance says is outdated.  

Another bright idea from Mitch Daniels

Mitch-Daniels

Purdue University President Mitch Daniels was in Washington last week to receive the Order of the Rising Sun, Gold and Silver Star from the government of Japan at an embassy ceremony. The award is one of Japan’s highest, and was given for “significant contributions to the strengthening of economic relations and mutual understanding” between the two countries.

During his time as governor of Indiana, Daniels saw 70 new direct investments in the state from Japan, including a Honda assembly plant that was the biggest “greenfields” investment in the United States in 2006. Over the following six years, Japan brought more than $2.6 billion of new investment and 8,400 jobs to the Hoosier State, as the governor led five economic missions to the country.

Since Daniels came to Purdue in January 2013, nearly $5 million of Japanese corporate research has come to campus. Largely because of the groundwork he laid, Indiana ranks second this year among the 50 states for best economic outlook, as measured through 15 important state fiscal policy variables laid out in the 10th annual edition of the American Legislative Exchange Council’s “Rich States, Poor States” study.

He’s also accomplished a number of significant milestones at Purdue, including a six-year tuition freeze. There may not be another university in the country that plans to charge students less tuition in 2019 than was paid in 2012. The student loan default rate for Purdue graduates hovers around 1 percent. The Milken Institute ranked Purdue No. 1 for technology transfer among public universities without a medical school.

Now, the university is going to expand its offerings to millions of people online. Instead of committing to a multiyear project to build a significant online learning university, Purdue announced April 27 that it is creating a new public university (temporarily named “New U”) by acquiring most of the assets of Kaplan University, a competency-based online-learning business with 15 campuses in the United States, 32,000 nontraditional students and nearly 80 years of remote-learning experience.

Kaplan offered the nation’s first totally online law school and has created study courses to review vast amounts of material for various accreditation and professional certification exams. It is a global provider of education programs in more than 30 countries and has forged partnerships with many colleges, universities, school districts and more than 2,600 corporations. The educational networking possibilities are nearly limitless. A whole new chapter of efforts to produce more affordable post-secondary opportunities, particularly for working adults, is likely to be launched by this marriage of a top public research institution and an online juggernaut in competency-based education.

According to reports, the Purdue faculty is not yet prepared to give its blessing to New U. Such reluctance is an endemic feature of both disruptive initiatives and university faculties generally. Though quick to embrace every progressive policy fad, those participants in higher education with sinecures anchored in the traditional business model are less likely to give Purdue’s management an immediate pass. But the New U model deserves more consideration, as workplace needs drive the absorption of sophisticated technical knowledge and skills, and it leans toward affordable learning for the benefit of students and the good of the country.

 

Sessions’ charging memo underscores need for Congress to pass reform

shutterstock_544448989

Attorney General Jeff Sessions’ memorandum instructing federal prosecutors to “charge and pursue the most serious, readily provable offense” against defendants signals a desire to return to a tough-on-crime stance. From the perspective of criminal justice reform, the most daunting aspect of these developments is a likely resurgent dependence on mandatory minimums.

As has been noted by John Malcolm, a criminal justice expert and director of the Heritage Foundation’s Edwin Meese III Center for Legal and Judicial Studies, reinstatement of stricter charging and sentencing policies is fully within the attorney general’s authority. We’ve seen it before from Attorney General Richard Thornburgh, who issued his own guidelines in 1989 requiring strict enforcement of all provable offenses. In the years since, there have been back-and-forth directives from Thornburgh’s successors Janet Reno, John Ashcroft, Eric Holder and, now, Sessions.

But in the interim, experts have gathered evidence against mandatory minimums, finding that heavy use of these sentencing laws failed to reduce drug use or recidivism. Mandatory minimum sentences are fixed prison terms applied to specific crimes, which can range from five years to life imprisonment. They strip judges of the ability to use their own professional discretion to determine sentencing based on the facts at hand.

The aim of mandatory minimums during the height of the 1980s crack epidemic was, of course, to target drug kingpins and cartel leaders in order to improve public safety. Prison populations surged, but primarily due to an influx of low-level offenders. With prisons now bursting at the seams and calls mounting for new prisons to house an ever-growing inmate population, taxpayers have had to shoulder the costs.

The most notable portions of Sessions’ memo are where he instructs that “the most serious offenses are those that carry the most substantial guidelines sentence, including mandatory minimum sentences,” which marks a deviation from the “smart-on-crime” approach under Holder. Sessions’ memo would ensure the U.S. Justice Department “fully utilizes the tools Congress has given” the agency.

But to the extent that it is Congress that provides the DOJ tools to enforce federal laws, Congress itself needs to reassess those tools. Sen. Rand Paul, R-Ky., has adopted that exact strategy. Alongside Sen. Patrick Leahy, D-Vt., Paul has introduced the Justice Safety Valve Act in the Senate, while Reps. Thomas Massie, R-Ky., and Bobby Scott, D-Va., have done the same in the House. The legislation authorizes federal judges to provide more fitting sentences outside of a mandatory-minimum requirement.

During a press call Wednesday, Paul noted that momentum for change is likely to build if more members introduce more criminal justice reform bills. While acknowledging that reform advocates face an “uphill battle,” he also indicated that he is “having conversations with people” within the Trump administration willing to listen.

The call to enforce harsher charging and sentencing methods is a serious concern, especially the goal to revive the one-size-fits-all use of mandatory minimum sentences. However, having Congress set the tone and dictate what tools are available to judges and the DOJ is currently the most effective way to remedy this recent course of events. It’s checks and balances at its finest.


Image by Brad McPherson

OPEN Government Data Act moves to Senate floor after markup

shutterstock_208794031

Legislation requiring federal agencies to publish their data online in a searchable, nonproprietary, machine-readable format has been cleared for the Senate floor following a May 17 markup by the Senate Homeland Security and Governmental Affairs Committee.

Sponsored by Sen. Brian Schatz, D-Hawaii, S. 760, the Open Public Electronic and Necessary Government Data Act is identical to an earlier Schatz bill that passed the Senate unanimously last year after analysis by the Congressional Budget Office determined it wouldn’t cost taxpayers any money.

What it would do is modernize government agencies and increase their effectiveness, while also allowing taxpayers to see how their money is spent. For these reasons, R Street joined more than 80 organizations—including trade groups, businesses and other civil-society organizations—in urging the Senate committee to pass these badly needed reforms.

The status quo makes it difficult for engaged citizens to view the spending data of the agencies they fund. A taxpayer interested in viewing the companies and organizations that receive federal grants and contract awards would need to have a license for the proprietary Data Universal Numbering System (DUNS). Dun & Bradstreet Inc., the company that owns DUNS, functions as a monopoly with respect to government contractor data.

In a 2012 report, the Government Accountability Office claimed the costs of moving away from DUNS to a different system would be too great, but that was in a time of fewer alternatives. More recently, a study by the General Services Administration’s 18F technology team showed that government agencies across the world are beginning to use a 20-digit code called the Legal Entity Identifier (LEI). The LEI is free for organizations and companies to use, as it is managed by the Global LEI Foundation, a nonprofit organization based in Switzerland. It would require no expensive upgrades.

Both the current and previous administrations have publicly supported transparency reforms for federal agencies. President Barack Obama introduced an Open Data Policy in 2013, and Matt Lira, a special assistant to President Donald Trump for innovation policy and initiatives, told an audience in April that financial transparency is still a priority for the White House.

Vested interests likely will still oppose the bill, which also has companion legislation, H.R. 1770, in the U.S. House. But given that it has support from both parties—an incredibly rare thing these days—as well as from the present and prior administrations, transparency advocates have room for optimism. The case for nonproprietary data standards and government transparency will now be in the hands of Congress.


Image by zimmytws

Big names weigh in on FCC’s net-neutrality rules

shutterstock_398393776

We seldom see a cadre of deceased Founding Fathers petition the Federal Communications Commission, but this past week was an exception. All the big hitters—from George Washington to Benjamin Franklin—filed comments in favor of a free internet. Abraham Lincoln also weighed in from beyond the grave, reprising his threat “to attack with the North” if the commission doesn’t free the internet.

These dead Sons of Liberty likely are pleased that the FCC’s proposed rules take steps to protect innovation and free the internet from excessive regulation. But it shouldn’t surprise us that politicians have strong opinions. What about some figures with a broader perspective?

Jesus weighed in with forceful, if sometimes incomprehensible, views that take both sides on the commission’s Notice of Proposed Rulemaking, which seeks comment on scaling back the FCC’s 2015 decision to subject internet service to the heavy hand of Title II of the Communications Act of 1934. Satan, on the other hand, was characteristically harsher, entreating the commissioners to “rot in Florida.”

Our magical friends across the pond also chimed in with some thoughts. Harry Potter, no doubt frustrated with the slow Wi-Fi at Hogwarts, seems strongly in favor of keeping Title II. His compatriot Hermione Granger, however, is more supportive of the current FCC’s efforts to move away from laws designed to regulate a now-defunct telephone monopoly, perhaps because she realizes the 2015 rules won’t do much to improve internet service. Dumbledore used his comments to give a favorable evaluation of both Title II and the casting of Jude Law to portray his younger self in an upcoming film.

A few superheroes also deigned to join the discourse. Wonder Woman, Batman and Superman joined a coalition letter that made up with brevity what it lacked in substance. The same can’t be said for the FCC’s notice itself, which contains dozens of pages of analysis and seeks comments on many substantive suggestions designed to reduce regulatory burdens on infrastructure investment and the next generation of real-time, internet-based services. Another, more diverse, coalition letter was joined by Morgan Freeman, Pepe the Frog, a “Mr. Dank Memes” and the Marvel villain (and Norse trickster god) Loki. It contained a transcript of Jerry Seinfeld’s Bee Movie.

Speaking of villains, Josef Stalin made known his preference that no rules be changed. But Adolf Hitler attacked Stalin’s position like it was 1941.

Then there are those with advanced degrees. Doctor Bigfoot and Doctor Who filed separate comments in support of net neutrality.

In a debate too often characterized by shrill and misleading rhetoric, it’s heartening to see the FCC’s comment process is engaging such lofty figures to substantively inform the policymaking process. I mean, it sure would be a shame if taxpayer money supporting the mandatory review of the 1,500,000+ comments in this proceeding was wasted on fake responses.


Image by Bonezboyz

Coppage talks urbanism on the Matt Lewis Show

R Street Visiting Senior Fellow Jonathan Coppage was a recent guest on the Matt Lewis Show, where he made the case for the Federal Housing Administration to re-legalize Main Street. Full audio is embedded below.

How Scott Gottlieb’s ‘boring’ approach could transform the FDA

IQ2

Dr. Scott Gottlieb, confirmed earlier this week by the U.S. Senate to become the new commissioner of the Food and Drug Administration, has a pragmatic—some might even say boring—approach to public health that could revolutionize how FDA regulations can fight the consequences of addiction.

With his vision of the future of tobacco, Gottlieb takes all the fun out of the heated arguments that anti-tobacco and pro-vaping individuals engage in on a regular basis – offering a reasonable solution to the disease burden of cigarettes. In a 2013 Forbes essay, he stated:

Whatever one thinks of cigarette makers, if the industry was earnest about transitioning away from the manufacture of smoked cigarettes, and getting into the development of new products that would still satisfy peoples’ taste for nicotine (with hopefully much lower risks) there could be public health virtue. The overall incidence of smoking related disease could be sharply diminished.

He acknowledges the enormous power the FDA has in the future of public health, particularly as it relates to tobacco consumption. He even has the guts to imply that “big tobacco” could actually be an ally in solving a problem many think they created, by encouraging cigarette manufacturers to focus on safer products and the e-cigarette market.

He recognizes that e-cigarettes present a viable alternative to other smoking-cessation products and that they have the potential to contribute to a future without combustible cigarettes. During his confirmation hearings, Gottlieb stated that reduced-harm products should be available to consumers to transition off combustible cigarettes, and he has taken note of the burdensome regulations that will be put on small businesses that want to enter the e-cigarette market under currently scheduled FDA vaping rules.

These comments suggest that he would be open to regulations that make it easier for safer products to enter the market, rather than the currently planned deeming regulations, which would require nearly all existing e-cigarette products to go through a pre-market tobacco application (PMTA) process that would cost approximately $300,000 for each combination of flavor, strength, mixture and device. In a harm-reduction model, this is important, because increased competition from small businesses in the e-cigarette market will increase innovation and production of even safer products, while decreasing the price point of products that are at least 95 percent safer than combustible cigarettes.
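
To see how quickly that per-application cost compounds across a product line, consider a minimal back-of-the-envelope sketch in Python. The product counts below are hypothetical illustrations; only the roughly $300,000 per-application estimate comes from the discussion above.

```python
# Back-of-the-envelope sketch of PMTA compliance costs for a small
# e-liquid maker. The product counts are hypothetical; only the
# ~$300,000 per-application estimate comes from the text above.
flavors = 10              # hypothetical number of flavors offered
nicotine_strengths = 4    # hypothetical strengths per flavor
devices = 2               # hypothetical device types sold

cost_per_application = 300_000  # approximate cost per PMTA filing

# Each flavor/strength/device combination needs its own application.
applications = flavors * nicotine_strengths * devices
total_cost = applications * cost_per_application

print(f"{applications} applications -> ${total_cost:,}")
# Prints: 80 applications -> $24,000,000
```

Even a modest catalog, in other words, can imply compliance costs that dwarf a small business’s revenues.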

Furthermore, this harm-reduction approach also could be applied to the opioid epidemic, which Gottlieb has stated is the FDA’s top priority. Medication-assisted treatments—such as methadone and Suboxone—help nearly 40 percent of people with opioid-use disorders to abstain from heroin and other commonly abused opioids. Opioid antagonists—such as Narcan and Vivitrol—can be used to reverse overdoses and cut cravings. Pharmaceutical companies, both big and small, have an opportunity to improve upon medications that can be used to treat opioid addiction and its consequences. Gottlieb’s willingness to embrace a harm-reduction philosophy and his recognition that it is important to have a practical approach to expensive and time-consuming FDA regulations could further encourage small pharmaceutical companies to enter the pipeline of life-saving opioid addiction treatments.

During the confirmation process, Gottlieb received criticism for his ties to the pharmaceutical industry. But frankly, his recognition that the tobacco and pharmaceutical industries can help solve an addiction crisis that kills nearly half a million people a year is to be applauded. That level-headed vision is exactly what the FDA needs to reduce the economic and health burden of addiction in the United States.

In reshaping U.S. energy policy, Perry’s best model is his home state

shutterstock_377979835

The Federal Energy Regulatory Commission (FERC) held a high-stakes conference May 1 and 2 to address the contentious interplay of state policies and interstate wholesale-electricity markets. The week prior, Energy Secretary Rick Perry announced his department might intervene in state energy planning to protect baseload coal and nuclear generation.

Ironically, market experts speaking before FERC identified state interventions to bail out coal and nuclear as the most damaging form of intervention. Market experts noted that state subsidies and mandates for renewables also continue to stress market performance, but do not displace system “capacity” needs the same way that baseload subsidies do.

This marks a fundamental debate over the role of government in competitive electricity markets, which happens to intersect with federalist themes in various ways. In the case of the Northeast and some Mid-Atlantic states, FERC seeks to uphold the competitive functionality of electricity markets, while some states have undertaken anti-competitive interventions to dictate outcomes that are better determined by markets. On the flip side, the prospect of federal intervention in state energy planning runs completely counter to conservative arguments against the Clean Power Plan, even if they are (incorrectly) made in the name of national security.

This paints a convoluted picture for federalists, but the pro-market case is clear – interventions at both the state and federal level are unwarranted and destroy wealth.

At the FERC conference, state representatives reiterated their support for relying on markets because of the clear economic benefits (i.e., little to no sign of “re-regulation” interest). Yet they also wanted to preserve the option to pick government-preferred investments, which runs counter to the very premise of liberalized electricity markets. Constructively, states expressed a willingness to engage in a dialogue aimed at identifying policy principles, though they struggled to articulate what those principles were.

Much of the challenge that state representatives—often, commissioners of public utility commissions—had in articulating an energy vision consistent with market principles stems from the fact that politicians back in their states often support industrial policy (i.e., government explicitly picking winners), creating a difficult agenda to reconcile with FERC’s obligation to uphold competition. The most common policy theme was to reduce emissions, yet the states largely rebuffed market-compatible approaches to reducing emissions – namely, emissions pricing. A couple of states brought up the need for state actions to improve reliability, claiming (contrary to the evidence) that markets aren’t able to provide reliable service. They rehashed generic slogans, like the need for fuel diversity, which has no direct bearing on whether an electricity system is reliable.

Competitive electricity markets are complex and poorly understood by state and federal policymakers. Given the rapid transition of electricity technologies and fuels, coupled with the persistent political obsession to dictate what this mix “ought” to be, the scene is set for half-truths and false narratives to prevail. Whether it’s progressives pushing for more renewables, confused conservatives supporting interventions to preserve baseload or any other such combination, all these narratives fundamentally miss the point that the goal of smart policy is to encourage well-functioning markets.

Fortunately, the narrative that we should level the playing field and let technologies compete on their merits still holds some political weight. Some FERC reforms could move in that direction, such as enabling participation of energy storage and pricing of fast-start resources. However, FERC reforms to appease the industrial-policy ambitions of some states (or the U.S. Department of Energy) would fundamentally deviate from the core objectives of competitive electricity markets. This could easily result in extensive unintended consequences. It’s not FERC’s job to validate state policy, but it must pass judgment on anti-competitive conduct. Conference participants offered ideas on how to define such conduct, and FERC would be wise to continue that dialogue.

There continues to be an immense need to educate state and federal policymakers on how electricity markets function and of the consequences of industrial policy, especially ad hoc subsidies. While the uptick in state interventions stirred controversy, it has also spurred productive dialogue in the Northeast and between the states and FERC. The conference demonstrated a clear need and willingness among states, stakeholders and current FERC commissioners to continue and deepen such a dialogue.

The concern of the Northeastern and some Mid-Atlantic states to reduce emissions is laudable. Pollution is a valid market failure that can be corrected, efficiently, by market-based policies. Such policies have excelled in competitive markets, where strong cost-reduction incentives have driven emissions reductions and innovations that lower the cost of emissions abatement. That’s an example of where state and federal interests align, as well-functioning markets that internalize all costs create the most benefit for society. But many policymakers do not understand these benefits clearly, and FERC and states should engage market experts in forums that help foster and disseminate this research. One example to highlight is Texas, which has seen reductions in costs and emissions without the controversy and distortions of industrial policy.

Encouragingly, Perry noted that the president asked him to reshape U.S. energy policy in the mold of Texas, where he spent 14 years as governor. Texas relies on competitive markets to signal power-plant investments and price-responsive demand. These markets do not explicitly value baseload, nor should they. Rather, they value reliable operations by providing revenues to resources that perform, especially when supply scarcity drives price spikes.

The Texas markets handsomely rewarded baseload coal and nuclear in the past, when they were highly competitive during periods of higher-priced natural gas. Now, inexpensive gas and cost declines for gas generation and renewables (the latter partially resulting from subsidies) have heavily cut into baseload margins, even driving some plants into retirement. Yet, according to the independent monitor of the Texas market, the reliability outlook remains strong as more gas and wind generation comes online (and, in the long term, solar). Meanwhile, consumer costs have plummeted. The monitor emphasizes that the new resource mix underscores the need for efficient price formation. This is the product of quality market design (e.g., “scarcity pricing” to account for the market failure surrounding resource adequacy) and market discipline, as interventions can dramatically distort investment decisions and freeze capital markets.

Perry would serve America well by encouraging the Texas model. Over the past few years, Texas has bolstered its scarcity pricing, while Texas legislators and regulators have let the market work. The Northeast and Mid-Atlantic have not done so, prompting FERC’s conference to address the uptick in disruptive state interventions. Once FERC re-establishes a quorum, it will face the tasks of improving price formation and moderating the effects of state interventions. Competitive markets will drive cost reductions, innovation and emissions reductions, but only if state and federal policymakers keep interventions at bay. As Marlo Lewis Jr. of the Competitive Enterprise Institute recently remarked, “subsidizing uneconomic energy to the detriment of consumers and taxpayers is no way to drain the swamp.”


Image by Crush Rush

Ridesharing a victim of Alaska’s budget battle

shutterstock_636156827

As Alaska’s legislative session ends, some perplexing in-house political gamesmanship has kept a popular bipartisan measure from making it to the House floor. It’s a shame, because freedom-loving Alaska now remains one of just a handful of states that still don’t allow ridesharing services such as Uber and Lyft to operate within their boundaries.

Senate Bill 14 has passed the full Senate and needs only a vote of the House to move it along to the governor’s office. But according to news reports, the House has bottled up the legislation in the Rules Committee, which rarely meets. The measure reportedly is being used as leverage by legislators fighting a contentious budget battle.

The Alaska Journal of Commerce reports that, instead of letting S.B. 14 get to the House floor and then presumably to the governor for his signature, “the House is effectively starting the legislative process anew by advancing its own version of Uber legislation.”

That’s bad for Alaskans, given the obvious benefits of allowing these services to operate. The bill wisely clarifies that these drivers are independent contractors, thus restricting various efforts to mandate the payment of myriad employee benefits and thereby keeping this a cost-effective option. It also would prevent Alaska cities from imposing their own onerous local restrictions on these services. The bill requires background checks for drivers of these transportation network companies (TNCs), but it’s still overall a good step forward that’s backed by ridesharing companies.

Although Alaska political observers expect that a ridesharing bill will get to the floor at some point this year or next, it’s a mistake to delay the ability of these companies to offer not only jobs, but rides to people who need a convenient – or safe – way to get home.

The Economist recently reported on a new study suggesting that “the arrival of Uber to New York City may have helped reduce alcohol-related traffic accidents by 25-35 percent.” According to the U.S. Centers for Disease Control and Prevention, Alaska has one of the nation’s highest rates of excessive drinking. Uber and Lyft aren’t a panacea for such a significant social problem, but they could make the streets of Anchorage, Juneau and Fairbanks a bit safer.

With a week to go before the session’s end and a budget crisis looming, legislators might have other things on their minds. But the budget problem will eventually get fixed. Alaska residents who want to use ridesharing services shouldn’t be held hostage to that process.


Image by Joseph Sohm

California looks to finally end the Cold War

shutterstock_294095897

The Cold War ended decades ago, but vestiges of the conflict still surround us. In the California Legislature, Assemblyman Rob Bonta, D-Oakland, has introduced Assembly Bill 22, which seeks to bring one chapter of that history to a close.

A.B. 22 would replace a nearly 80-year-old prohibition that bars members of the Communist Party, or individuals who otherwise advocate communist ideals, from employment by the State of California. In its place, the bill would impose an ideologically neutral prohibition on employing anyone who actively seeks the forceful or violent overthrow of the government of the United States.

There can be no doubt that communism was a blight on the 20th century. In its name and under its red banners, hundreds of millions of people were killed. And it is well-known that the chief geopolitical rival of the United States through the second half of that century—the Soviet Union—was a power animated by communist ideology.

It is therefore no wonder that, in an effort to ensure the state’s government institutions were not subverted by those who would like to see the Soviet Union best the United States, legislators placed in statute a prohibition on employment for anyone with ongoing ties to, or outspoken sympathy for, the Communist Party. The party, like the Soviet Union, was understandably viewed with extreme prejudice by lawmakers who felt threatened by those who sought to topple market-oriented liberal democratic institutions. In fact, the text of the existing California law goes into great detail about the consequences of communism and spells out unambiguously the threat posed by a Communist fifth column.

But while it remains helpful to examine the history of communism to better understand dictatorial barbarism and anti-democratic preferences, the time has come to correct the mistakes that legislators of decades past made when they needlessly trampled their own values by targeting people’s beliefs, rather than their actions. A.B. 22 does that.

To be clear, there aren’t legions of communists waiting to enmesh themselves in California’s bureaucracy, so it’s a bit strange that a lawmaker would feel so strongly as to want to carry the legislation. But Republican opposition to Bonta’s bill is no more explicable.

Bonta’s bill doesn’t diminish our recognition of the repugnant nature of communist ideology. That ideology was, and remains, an affront to individual liberty and dignity. But it is the liberal aspirations of the United States—which preclude discriminating on the basis of one’s political beliefs—that set the country apart from the Soviet Union in the first place.


Image by StrelaStudio

R Street hosts Justice for Work coalition panel

The R Street Institute hosted an April 17 launch party for the Justice for Work Coalition. Justice for Work is a coalition of organizations spanning the ideological spectrum that seeks to raise awareness and advocate for lowering the barriers created by laws and regulations that unnecessarily restrict economic participation.

The event included a panel discussion featuring former law enforcement officers, an ex-offender, and policy and legal experts.

The panelists were:

  • Arthur Rizer, R Street Institute
  • Ed Chung, Center for American Progress (moderator)
  • Teresa Hodge, Mission: Launch
  • Marcus Bullock, Flikshop
  • Alvaro Bedoya, Georgetown University Law Center

Full audio of the panel is embedded below.

CRS should stop fighting access to its own reports

shutterstock_276493547

The Congressional Research Service plays an essential role in policymaking and oversight. It makes Congress smarter about issues and teaches new legislators how to legislate. I would not have spent 11 years working at CRS if I did not think very highly of the institution.

But there is one topic on which the widely esteemed and nonpartisan agency has been embarrassingly biased: the proposals to make its reports more equitably available to the public.

As a practical matter, CRS reports are available – 27,000 copies can be found on government and private-sector websites. EveryCRSReport.com, for example, has more than 8,000 reports. But official congressional policy does not provide for consistent public release of the reports, which explain the workings of Congress, agencies and myriad public policies.

Legislation has been introduced in this Congress and last Congress to fix this situation, and a number of times previously. Reps. Mike Quigley, D-Ill., and Leonard Lance, R-N.J., would have the Government Publishing Office post the reports on GovInfo.gov. This solution would give citizens a central repository where they can read authenticated copies of the reports, and would relieve CRS and congressional staff of the hassles of responding to reporters, lobbyists and constituents who ask for copies.

Inevitably, CRS proclaims aloud that it takes no position on the issue and will do whatever Congress directs. But how are we to square that claim with this 2015 memorandum that CRS’ leadership shopped to legislators? The memorandum is modestly titled: “Considerations arising from the dissemination of CRS products.” The content, however, is nothing but scare-mongering speculation about bad things that might happen if more Americans had access to CRS reports. Proponents of expanded access to CRS reports quickly demolished the claims made in CRS’ “considerations” memo.

As someone who once reviewed CRS reports before they were published, I can tell you that, had a CRS analyst written this memo, it never would have seen the light of day. And said analyst would have been rebuked by his or her supervisor. The memorandum not only misconstrues what is being proposed — nobody is advocating that CRS itself distribute the reports—but it also makes no mention of the many possible benefits of a change in policy (like increased public understanding of how Congress and government operates).

That means the memo violates CRS’ own very clear policies that its work for Congress must be accurate and unbiased, and must consider the possible benefits and costs of any proposed policy. (This internal CRS rule not only is intellectually honest, it also, ahem, protects the agency from having its work give the appearance of bias.)

One hopes that someone in Congress will call CRS leadership on the carpet for this tartuffery and demand that the agency disavow the memorandum. In a time when federal budget cuts are being seriously discussed, the agency does itself, its employees and Congress no favors by being the lone voice advocating against common-sense reform.


Image by Micolas

Fierce debates dominate D.C.’s first E-Cigarette Summit

shutterstock_541060171

If you imagined an e-cigarette conference full of policymakers at a Marriott in Washington would be a tame event, you would be wrong. I suppose I shouldn’t be surprised that e-cigarettes could be a polarizing topic, but I will not soon forget the cheers and boos in the crowd as people stood up to state their opinions and present their research at the first E-Cigarette Summit here in Washington.

A running theme of this conference came down to the existential question: are you a skeptic or are you an enthusiast? Are e-cigarettes addictive products designed to hook teenagers or should they be marketed to current smokers as a quitting tool?

It’s important to understand that e-cigarettes are much safer than combustible cigarettes. Every panelist—including professors, physicians, economists and industry folks—agreed with reports that e-cigarettes are at least 95 percent safer than traditional cigarettes. What is not so easily understood is how best to use e-cigarettes to promote a healthier society.

We’ve seen debates like this before. Will needle-exchange sites keep injection users free from infectious disease or will they tacitly encourage people to try heroin? Does condom distribution in high schools prevent teen pregnancy or lead to a breakdown of morals? There are valid points in support of either argument, but whichever way we as a society land will have long-lasting effects.

The truth is there are a lot of specific questions that need to be answered before people will feel comfortable with novel devices. When it comes to e-cigarettes, there needs to be a balance between consumer protection, trust and the application of science, so that sound policy can best direct public health goals. Some of the discussions at this forum centered on questions for which we don’t yet have definitive answers:

  • Does a standalone nicotine product at the concentrations found in e-cigarettes (with the absence of other chemicals that are present in tobacco) produce changes in the brain consistent with addiction?
  • What environmental or product factors are predictors of successful transition from combustibles to e-cigarettes?

As we move forward in our research and advocacy endeavors, the answers to these questions will help shape both tobacco and e-cigarette policy and will form a foundation for U.S. harm-reduction policy.

Some of the more contentious issues created even more forceful debates. While e-cigarettes are effective smoking-cessation tools, physicians are reluctant to recommend them over medications, gum or the patch. Although teen smoking rates are at historic lows, the rise in experimentation with e-cigarettes is concerning (it is noteworthy that daily use of e-cigarettes among teens is 2 percent). While it is unethical to perpetuate the myth that e-cigarettes are nearly as harmful as traditional cigarettes, some suggest there might be an ethical dilemma in marketing e-cigarettes to recreational users.

It is fair to say that more information is better to avoid hooking a new generation on cigarettes, but it is more important to use the tools we have now to encourage smokers to switch to safer products. We cannot forget that, just today, more than 1,300 people will die from smoking in the United States alone. Getting people to stop smoking combustible cigarettes should be our No. 1 priority and there is now a product to make that happen.


Image by LezinAV

Fixing California’s bloated sex-offender registry

shutterstock_54425851

R Street just signed a letter calling for commonsense reform of the California sex-offender registry, based on a bill proposed by our friend and Legislative Advisory Board member Sen. Joel Anderson, R-Alpine.

The bill we’re supporting in California, backed strongly by our own research, creates a tiered system for adult sex offenders. This is a step in the right direction toward reforming California’s overgrown sex-offender registry. A registry that includes too many people is likely even worse than one that includes too few: it diverts resources toward monitoring low-risk people when they should be devoted to monitoring the relative handful of truly dangerous offenders. The best available research on sex-offender registries, which I summarized in this article for National Affairs, indicates that risk-based approaches like the one contemplated in the bill are good public policy.

While taking this first step is important, it doesn’t solve what is likely the single biggest problem with sex-offender registries: their inclusion of offenders who were adjudicated as juveniles. As I’ve written about here with my friends Nicole Pittman and Stacie Rumenap, it’s unjust, cruel and undermines the purpose of the juvenile justice system—which, at least in theory, is supposed to act in offenders’ own best interests. Youth registration, as R Street research has shown, costs millions of dollars more than it could possibly save. It’s the single greatest inefficiency in our sex-offender registration system.

The California bill is a good start, but it’s only a start. If the Golden State really wants to fix its registry, it’s going to have to end the registration of children.


Image by Jeffrey B. Banke

Congressmen reintroduce bill to make CRS reports public


The Government Publishing Office would be required to make Congressional Research Service reports publicly accessible over the internet, under legislation reintroduced last week by Reps. Leonard Lance, R-N.J., and Mike Quigley, D-Ill.

The CRS, a division of the Library of Congress, is known as Congress’ in-house “think tank.” House offices and committees historically have been free to publish CRS reports on their own websites for constituents to view, and some third parties aggregate CRS data on websites like EveryCRSReport.com.

But while taxpayers spend more than $100 million annually to fund CRS, timely access to these important documents is usually reserved for Washington insiders. There exists no official, aggregated source for taxpayers to access the CRS’ valuable and informative work.

R Street Vice President for Policy Kevin Kosar, himself a veteran CRS analyst, testified recently before the House Legislative Branch Appropriations Subcommittee, where he presented the panel with a letter signed by 25 former CRS employees with more than 570 combined years of service who all support an open public database of nonconfidential CRS reports.

There is strong precedent for public access to legislative support agency documents. In his subcommittee testimony, Kevin noted the Government Accountability Office, Congressional Budget Office and the Law Library of Congress all make their reports public, as do the 85 percent of G-20 countries whose parliaments have subject-matter experts.

Proposals like the Lance-Quigley bill would place publishing responsibilities with another entity, to ameliorate CRS concerns about the service having to publish the reports itself. Briefings and confidential memoranda would not be disclosed and data issued to the public through a searchable, aggregated database would only include nonconfidential information.

As Kevin noted in his testimony, the public deserves to be on equal footing with lobbyists and the Hill.

 

Applying a BAT to reinsurance would be a big swing and a miss by Congress

shutterstock_626738702

As we all saw in recent media coverage of President Donald Trump’s 100th day in office, many observers treat the first 100 days of a new presidential administration as if it were the only time that matters, a legacy that has been with us since President Franklin Roosevelt passed most of his New Deal agenda in the first three months of his administration in 1933.

But in some ways, the first 100 days of any new Congress or presidential administration actually is more like baseball’s spring training. It offers lawmakers the chance to warm up, get their teams set and plot out a game plan for the coming year. For baseball, the end of spring training is marked by the start of competitive play. As of last week, Washington’s spring training is closed and it is time to play ball.

The president, congressional leaders and Washington’s many think tanks all have their versions of what comprehensive tax reform should look like, and frankly, everyone is all over the field. One of the biggest issues under debate is a plank from the House Republicans’ plan called the border-adjustment tax, or “BAT.” If Washington isn’t careful, this plan could turn into one giant swing and a miss, particularly when it comes to the reinsurance market.

For a quick trip around the bases, essentially, under the BAT, companies will no longer be able to deduct the costs of imported goods and services. Meanwhile, any company that exports or profits from foreign sales will now enjoy that income tax-free. The debates over whether or not this will be a good thing for the U.S. economy tend to focus on a very few select points. However, if the subjects of insurance and reinsurance are left on the bench, we are going to find ourselves wishing for a rainout.

Right now, it’s unknown whether House Republicans still intend to go forward with their plans for a BAT, much less whether it would apply to financial services like reinsurance – something only one (China) of the 160 countries that employ the conceptually similar value-added tax does. If Congress chooses to follow in China’s footsteps, we have a problem.

In order to take on the risks of events like Texas hailstorms, Missouri tornadoes, Florida hurricanes and California earthquakes, property insurance companies cede portions of those risks to the global reinsurance market, where they are pooled with risks like earthquakes in Japan, floods in the United Kingdom or terrorist events in France. By pooling portions of these uncorrelated risks from around the globe, the reinsurance market makes it possible for Americans to buy affordable insurance for their homes, vehicles and businesses.
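
Why does pooling uncorrelated risks lower costs? A minimal simulation sketch in Python, using entirely hypothetical loss numbers, illustrates the diversification logic: a pool of independent regional risks needs far less capital per region than each region standing alone.

```python
import random

# Toy illustration (hypothetical numbers): each of 10 regions has a 2%
# chance of a $100 million catastrophe loss in a given year. A reinsurer
# pooling all 10 uncorrelated risks needs much less capital than 10
# stand-alone insurers each holding reserves for a full loss.
random.seed(42)

regions, prob, loss, years = 10, 0.02, 100, 100_000
pooled_losses = []
for _ in range(years):
    annual = sum(loss for _ in range(regions) if random.random() < prob)
    pooled_losses.append(annual)

pooled_losses.sort()
var_99 = pooled_losses[int(0.99 * years)]  # 99th-percentile annual loss
print(f"Capital for the pool at the 99th percentile: ${var_99}M")
print(f"Capital if each region stands alone: ${regions * loss}M")
```

In this stylized setup, the pool can cover all but the worst 1 percent of years with roughly a fifth of the capital that stand-alone coverage would require; taxing cross-border cessions shrinks exactly this diversification benefit.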

If Congress decides to pass a BAT system that would apply to reinsurance, the cost to American consumers would be painful. A recently released study by the Brattle Group looked at the effects of a BAT on the reinsurance market and found U.S. consumers would have to pay between $8.4 billion and $37.4 billion more each year just to get the same coverage. Several of my colleagues recently have conducted more targeted research finding that, over the next decade, the tax would add $3.39 billion to the cost of property insurance in Texas and $1.11 billion in Louisiana.

Applying a border-adjustment tax to reinsurance would be a pitch in the dirt for American consumers and Congress shouldn’t swing. Insurance companies will be put in the unwinnable position of having to raise their prices and offer less coverage. The end result is higher costs, with more risk concentrated on American shores. That’s a bad call for everyone.


Image by smspsy

Three years in, what does the DATA Act tell us about agency spending?

shutterstock_208794031

Trying to figure out exactly how much money the federal government spends long has been an exercise in futility for the few brave souls who try. Though the U.S. Treasury has published financial data since the beginning of the republic, the government has an uneven history, to say the least, when it comes to reporting agency expenditures.

Agencies traditionally have employed a hodgepodge of data and spending models that fail to adhere to a common metric. This makes it difficult for lawmakers and policy experts to wrap their arms fully around federal agency spending. Since at least the 1970s, efforts have been afoot to standardize government data, culminating in 2014’s Digital Accountability and Transparency Act, also known as the DATA Act.

The bill’s purpose was to make expenditures both more transparent and more accessible. It requires Treasury to establish common reporting standards across all federal agencies, with the data posted online in a publicly accessible format.
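
To make the “machine-readable” requirement concrete, here is a minimal sketch, in Python, of how a standardized spending record could be consumed programmatically. The field names are hypothetical stand-ins for illustration, not the DATA Act’s actual schema elements.

```python
import json

# A hypothetical award record, standing in for the kind of standardized,
# machine-readable data the DATA Act requires agencies to publish.
# Field names are illustrative, not the official schema.
record_json = """
{
  "agency": "Department of Examples",
  "award_id": "EX-2017-0001",
  "recipient": "Example Contractor LLC",
  "obligated_amount": 1250000.00,
  "fiscal_year": 2017
}
"""

record = json.loads(record_json)

# Because every agency reports the same fields in the same format, one
# small script can aggregate spending across the entire government.
print(f"{record['agency']} obligated ${record['obligated_amount']:,.2f} "
      f"to {record['recipient']} in FY{record['fiscal_year']}")
```

That is the point of common standards: once every agency speaks the same data language, comparisons that once took months of manual reconciliation become a few lines of code.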

The DATA Act has been in the news again recently because the first agency reporting deadline is May 9, the third anniversary of the law’s passage. Right on cue, the DATA Coalition hosted a panel discussion and “hackathon” last week to let teams of data wonks work with some of the early datasets the agencies have provided.

Keynote speaker Rep. Jim Jordan, R-Ohio, emphasized the potential for uniform spending data to shape policy by helping lawmakers better understand the scope and size of government. That, in turn, could allow them to enact more meaningful reforms. As he put it: “If you don’t know where you are, it’s impossible to know where you’re going.”

The coalition also hosted a panel featuring three individuals who have been key to creating the uniform financial data standards the agencies now must use: Christina Ho, deputy assistant Treasury secretary for accounting policy and financial transparency; Dave Zvenyach, executive director of the General Services Administration’s 18F project; and Kristen Honey, senior policy adviser to the Office of Management and Budget’s chief information officer.

The panelists generally were optimistic about the implementation process, though each noted the difficulty involved in pursuing new endeavors within a convoluted bureaucracy like the federal government. Honey was sanguine about the potential for agencies to follow the lead of private industries that use open datasets for productive ends, noting that American taxpayers have “already paid for this data, so they should have access to it.”

She pointed to the example of a synthetic dataset the Department of Veterans Affairs published last fall that will help researchers study mental health issues among military veterans. Honey also predicted that state and local governments are likely to follow suit on open-data initiatives, which she hoped would help expose and weed out inefficiencies in spending and operations across all levels of government.

The panelists also cautioned that many agencies likely will encounter difficulties aggregating and successfully publishing their spending data by the May 9 deadline. The concern was that if reports from the Government Accountability Office and agency inspectors general catalog widespread deficiencies around the first reporting deadline, it could lead the public and lawmakers to doubt the DATA Act’s efficacy.

James Madison famously claimed that the power of the purse was “the most complete and effectual weapon” that could be wielded by government. Increasing the standardization and transparency of government spending data will only help strengthen that power.


Image by zimmytws

Eli Lehrer at the New American Jobs Summit

R Street President Eli Lehrer was featured on a recent panel at the New American Jobs Summit, joined by Micaela Fernandez Allen of Wal-Mart, Tom Kamber from Older Adults Technology Services and Bill Kamela of Microsoft Corp., to discuss how technology and shifting economic needs are changing how workers prepare to join or rejoin the workforce. Video of the full panel is embedded below.

What’s wrong with e-cigarettes?

R Street Policy Analyst Caroline Kitchens recorded this recent video for PragerU on e-cigarettes, a safer alternative to traditional tobacco cigarettes that could help millions of smokers to quit.

Let’s get rid of Puerto Rico’s triple-tax exemption

shutterstock_467792156

Let’s ask a simple and necessary question: Why in the world is the interest on Puerto Rican bonds triple-tax exempt all over the United States, when no U.S. state or municipality gets such favored treatment?

The municipal bond market got used to that disparity, but in fact, it makes no sense. It is an obvious market distortion, on top of being unfair to all the other municipal borrowers. It helped lure investors and savers, and mutual funds as intermediaries, into supporting years of overexpansion of Puerto Rican government debt, ultimately with disastrous results. It is yet another example of a failed government notion to push credit in some politically favored direction. Investors profited from their special exemption from state and local income taxes on interest paid by Puerto Rico; now, in exchange, they will have massive losses on their principal. Just how big the losses will be is still uncertain, but they are certainly big.

Where did that triple-tax exemption come from?  In fact, from the Congress in 1917. The triple-tax exemption is celebrating its 100th anniversary this year by the entry of the government of Puerto Rico into effective bankruptcy. Said the 1917 Jones-Shafroth Act:

All bonds issued by the government of Porto Rico, or by its authority, shall be exempt from taxation by the Government of the United States, or by the government of Porto Rico or of any political or municipal subdivision thereof, or by any State, or by any county, municipality, or other municipal subdivision of any State or Territory of the United States, or by the District of Columbia.

That’s clear enough. But why?  Said U.S. Sen. James K. Vardaman, D-Miss., at the time: “Those people are underdeveloped, and it is for the purpose of enabling them to develop their country to make the securities attractive by extending that exemption.” All right, but 100 years of a special favor to encourage development is enough, especially when the result was instead to encourage massive overborrowing and insolvency.

It’s time to end Puerto Rico’s triple-tax exemption for any newly issued bonds (as there will be again someday). As we observe the unhappy 100th birthday of this financial distortion, it’s time to give it a definitive farewell.


Image by Filipe Frazao

Lehmann talks NFIP reform on NPR’s Marketplace

In the wake of devastating floods in Missouri, R Street Editor-in-Chief and Senior Fellow R.J. Lehmann was a guest on National Public Radio’s “Marketplace” to discuss why reforms to the National Flood Insurance Program that encourage more private market participation and risk-based rates are essential. The audio is embedded below.

Kevin Kosar on Fox 5 DC ‘On The Hill’

R Street Institute Vice President of Policy Kevin Kosar appeared on Fox 5 DC’s “On The Hill” to discuss President Donald Trump’s first 100 days in office.

Cameron Smith talks Alabama’s backdoor booze tax

R Street’s Cameron Smith joined the Matt & Aunie Show on Birmingham’s Talk 99.5 to discuss backdoor booze taxes in Alabama. Audio of the show is embedded below.

Kosar testifies to House Legislative Branch Appropriations Subcommittee on CRS reports

On May 3, 2017, R Street Vice President of Policy Kevin Kosar testified before the House Legislative Branch Appropriations Subcommittee in support of making Congressional Research Service reports available to the public.

More from Kevin Kosar on why CRS reports should be publicly available can be found here.

Greenhut on ‘damning’ UC audit

R Street Western Region Director Steven Greenhut was a recent guest on the John and Ken Show on KFI AM 640 in Los Angeles to discuss his piece in the Orange County Register about the recent unfavorable audit of the University of California system. Audio of the show is embedded below.

Puerto Rico’s inevitable debt restructuring arrives

shutterstock_379193659

“Debt that cannot be repaid will not be repaid” is Pollock’s Law of Finance. It applies in spades to the debt of the government of Puerto Rico, which is dead broke.

Puerto Rico is the biggest municipal market insolvency and, now, court-supervised debt restructuring in history. Its bond debt, in a complex mix of multiple governmental issuers, totals $74 billion. On top of this, there are $48 billion in unfunded public-pension liabilities, for a grand total of $122 billion. This is more than six times the $18.5 billion with which the City of Detroit, the former municipal insolvency record holder, entered bankruptcy.
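For readers keeping score, the headline arithmetic is simple enough to check directly. Here is a minimal sketch, using only the figures cited above:

```python
# The headline figures cited above, in dollars.
bond_debt = 74e9             # Puerto Rico bond debt
pension_liabilities = 48e9   # unfunded public-pension liabilities
detroit_debt = 18.5e9        # Detroit's debt entering bankruptcy, for comparison

total = bond_debt + pension_liabilities
print(f"Grand total: ${total / 1e9:.0f} billion")           # 122 billion
print(f"Multiple of Detroit: {total / detroit_debt:.1f}x")  # about 6.6x
```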

The Commonwealth of Puerto Rico will not enter technical bankruptcy under the general bankruptcy code, which does not apply to Puerto Rico. But today, sponsored by the congressionally created Financial Oversight and Management Board of Puerto Rico, it petitioned the federal court to enter a similar debtor-protection and debt-settlement proceeding. This framework was specially designed by Congress for Puerto Rico under Title III of the Puerto Rico Oversight, Management, and Economic Stability Act (PROMESA) of 2016. It was modeled largely on Chapter 9 municipal bankruptcy and will operate in similar fashion.

This moment was inevitable, and Congress was right to provide for it. It is a necessary part of the recovery of Puerto Rico from its hopeless financial situation, fiscal crisis and economic malaise. But it will make neither the creditors, nor the debtor government, nor the citizens of Puerto Rico happy, for all have now reached the hard part of an insolvency: sharing out the losses. Who gets which losses and how much the various interested parties lose is what the forthcoming proceeding is all about.

The proceedings will be contentious, as is natural when people are losing money or payments or public services, and the Oversight Board will get criticized from all sides. But it is responsibly carrying out its duty in a situation that is difficult, to say the least.

There are three major problems to resolve to end the Puerto Rican financial and economic crisis:

  • First, reorganization of the government of Puerto Rico’s massive debt: this began today and will take some time. In Detroit, the bankruptcy lasted about a year and a half.
  • Second, major reforms of the Puerto Rican government’s fiscal and financial management, systems and controls. Overseeing the development and implementation of these is a key responsibility of the Oversight Board.
  • Third—and by far the most difficult step and the most subject to uncertainty—is that Puerto Rico needs to move from a failed dependency economy to a successful market economy. Economic progress from internally generated enterprise, employment and growth is the necessary long-term requirement. Here there are many historical and political obstacles to overcome. Not least, as some of us think, is that Puerto Rico is trapped in the dollar zone and thus cannot achieve external adjustment by devaluing its currency.

The first and second problems can be settled in a relatively short time; the big long-term challenge, needing the most thought, is the third problem.

The story of the Puerto Rican financial and economic crisis just entered a new chapter, but it is a long way from over.


Image by bobby20

Rep. Ken Buck on the Federal Budget Accountability Act

shutterstock_501310024

The Federal Budget Accountability Act—introduced last month by U.S. Rep. Ken Buck, R-Colo., as H.R. 1999—is a short bill, barely two pages long. But it aims to help Congress answer a basic oversight question: how much revenue does the federal government actually receive each year from offsets?

As part of the congressional budget process, Congress gathers estimates of revenues to be received by the federal government, which can be used to “offset” authorizations for spending. For example, as a Buck press release points out, Congress authorizes the Strategic Petroleum Reserve to sell oil. “However, the price of crude oil continuously fluctuates … [which] creates uncertainty regarding the accuracy of Congressional Budget Office projections versus actual revenue received through offsets.”

I had the chance to speak about the bill with Buck, who came upon the issue soon after he arrived in the House in January 2015. “There was not a moment when a lightbulb went off. It was a series of statements about how new spending was ‘paid for,'” he said.

On its face, Buck’s bill may seem utterly unobjectionable. It requires nothing more than that the Office of Management and Budget annually report to Congress on the actual revenues received from offsets. Obviously, it is a basic fiduciary duty to discern whether the revenues received actually cover the costs as intended. A few members of the House Budget Committee are cosponsoring the legislation.

But will H.R. 1999 advance? It’s not clear. Buck suspects that additional spending is being passed off as budget neutral by the misuse of overly optimistic offsets. (On offsets and spending amendments in the House, see this CRS report.) “If they pass the bill, the misrepresentations will be known,” he told me. Enacting the legislation could collectively call out Congress and make the already tough debates over mandatory spending more difficult. “Nobody wants to know what the answer is,” Buck reports, “but we all know. … We just don’t know how bad it is.”


Image by lkeskinen

Dodd-Frank reform must include repealing the Durbin amendment

shutterstock_252948091

Many of us know what a “seven-year itch” is. Between the famous Marilyn Monroe movie of the 1950s and the legendary Rosanne Cash song of the 1980s, it is a fairly well-understood turn of phrase.

Congress finally got around this past week to scratching one of the most economically painful and fairly literal “seven-year itches” by starting the process to roll back the Dodd-Frank Act, which will turn seven this July.

The Financial CHOICE (Creating Hope and Opportunity for Investors, Consumers and Entrepreneurs) Act—currently before the House Financial Services Committee—has many bright ideas and could serve as a great replacement for the burdensome Dodd-Frank bill of the Obama years. However, in the midst of this happy occasion, the American consumer needs to pay close attention, because Congress may in the end do something stupid.

A behind-the-scenes effort is underway to let a Dodd-Frank provision commonly referred to as the “Durbin amendment” remain in the law. If you have a checking account, you should not let Congress keep this provision on the books. Chairman Jeb Hensarling, R-Texas, took a strong stand in calling for repeal of the Durbin amendment as part of the CHOICE Act, and the committee should follow his lead by keeping that repeal in the final mark-up.

The Durbin amendment affects virtually anyone with a checking account and a debit card. It requires the Federal Reserve to impose artificial government price controls that cap what banks may charge retailers in “interchange fees,” which banks use to pay for the security they provide for customers’ accounts. The cap is set far lower than the fee would be in a free market, creating a host of unintended consequences.

Before the government interfered, banks and credit unions used these fees to cover more than just security. They used the revenues to offer perks to their customers, like free checking or points-based rewards systems similar to those offered with traditional credit cards. Studies have shown these perks are worth millions of dollars in value to customers. But thanks to the Durbin amendment, banks have been forced to scale back their perks dramatically. The end result has hurt consumers, particularly those—like lower-income families or younger customers—who rely heavily on their checking accounts to conduct financial transactions.

While checking-account customers lost out, retailers (especially big-box retailers) made out like bandits. In 2010, the major retailers’ lobby sold Congress on limiting these transaction fees, promising they would pass along the savings to their customers. To date, there is no evidence that has ever happened. In fact, an analysis of Federal Reserve data shows retailers have made off with more than $42 billion in forgone interchange fees over the last seven years. Shoppers have seen virtually no decrease in prices, even as they watched many of their banking benefits disappear.

As the Financial Services Committee wraps up its hearings on the CHOICE Act, it’s important for the American people not to sit idly by. The Durbin amendment was sold in 2010 as protection for the American people, but the data prove the only protection it offers is for the major retailers’ profit margins. The House Financial Services Committee should strive to repeal the Durbin amendment, as should the full House when the bill hits the floor.


Image by alice-photo

Congress’ ‘cotton fix’ just another corporate welfare handout

shutterstock_508062640

Spring is a special time in Washington, filled with many wonderful traditions. Between the blooming of the cherry blossoms, the White House Easter Egg Roll and the Washington Nats’ Opening Day, the nation’s capital is full of action.

However, none of these events compares to Congress’ favorite perennial tradition: trying not to shut down the government. After a two-week spring break, Congress is back, ready to work and horse-trading for votes to prevent a government shutdown. One of the items currently being kicked around for “trade” is a massive expansion of two corporate welfare programs: under the so-called “cotton fix,” Congress is poised to expand the U.S. Department of Agriculture’s Agriculture Risk Coverage and Price Loss Coverage programs to include cotton as a covered crop.

The ARC and PLC programs already are hardly the gold standard of fiscal responsibility. When Congress created the programs in the 2014 farm bill, the projected costs were $18 billion over five years. They now are projected to cost $32 billion over that same time frame. If Congress succeeds in adding cotton to the mix, the projected costs easily could grow by an additional $1 billion a year.

This might be understandable if there were some crisis in the domestic cotton industry that needed to be averted, but Big Cotton already has a pretty cozy deal with Washington. Between subsidized marketing loans, trade promotion programs and economic assistance to cotton mills, the industry is well taken care of by American taxpayers.

And that’s not all the federal government does for them. Unlike many other crops, cotton growers can participate in the Federal Crop Insurance Program and ask taxpayers to cover 62 percent of their premiums. Furthermore, during negotiations that produced the last farm bill in 2014, the cotton lobby was able to convince Congress to create a special program just for them called the Stacked Income Protection Plan (STAX). This cotton-only program has taxpayers covering 80 percent of the cost of policies that protect against “shallow losses” too minor to be covered under traditional crop insurance.

The cotton industry’s costs to American taxpayers don’t end there. The federal government is in the process of paying out $300 million to the Brazilian cotton industry as part of a 2014 settlement agreement with the World Trade Organization. The settlement was a way to resolve a longstanding trade dispute with Brazil over U.S. domestic cotton subsidies that violated WTO rules. The $300 million payment comes on top of about $500 million the United States paid Brazil from 2010 to 2013 over the same set of issues.

The STAX program was created in hopes that it would stave off future disputes with Brazil, but whether STAX meets WTO rules is itself still an open question among experts. What is certain is that adding cotton to the ARC and PLC programs would only raise the odds of more trade disputes that ultimately cost Americans more money.

Let’s be clear: cotton is still king in Texas and some other parts of the country, and Congress knows it. Adding cotton to ARC and PLC isn’t a noble gesture to a struggling industry. It’s about making sure multimillion-dollar companies maintain their profit levels at U.S. taxpayers’ expense.

Congress made a deliberate decision to exclude cotton from these two programs when they were created in 2014. For Congress to sneak cotton in the back door of a must-pass bill would amount to yet another corporate welfare payoff, with taxpayers once again left holding the bag.


Image by Kent Weakley

Statewide ridesharing rules on the table in Louisiana

shutterstock_558571558

Louisiana may soon join the more than 40 states that have adopted some kind of statewide ridesharing rules, under legislation that would pre-empt parish and local governments from setting regulations and taxes on transportation network companies.

Sponsored by state House Transportation Committee Chairman Kenny Havard, R-St. Francisville, H.B. 527 would require TNCs to register with the Louisiana Department of Transportation and Development and to charge a “local assessment fee” equal to 1 percent of each “gross trip fare.” The fee would be remitted to the local governments where rides originated, with part also collected by the state to administer the permitting process.

TNCs would be required, through their apps, to display the driver’s picture and license plate before the passenger enters the vehicle. The TNC would also be required to transmit an electronic receipt of the trip.

The legislation also imposes minimum requirements for drivers. The state would bar from working as TNC drivers all sex offenders; felons, for up to seven years after their conviction; and those convicted of an offense involving drugs or alcohol. It also requires TNCs to adhere to all state anti-discrimination laws and laws providing for the transport of service animals. The law bans drivers from using drugs or alcohol while on duty and requires TNCs to post that policy on their websites and provide a means for reporting violations.

In exchange for these requirements, the state would bar local governments and other authorities (including airports) from imposing their own requirements or additional fees. Airports would be permitted to impose only those fees that taxi drivers already pay. Finally, the statute would clarify that TNCs are not taxi operators and are not bound by the regulatory code governing taxis.

Understandably, the proposal isn’t being received kindly by some in local government:

New Orleans Councilwoman Susan Guidry, who authored the city’s ordinance regulating ride-hailing services, said just a quick overview of the proposed law showed it fell short of the city’s ordinance in a number of ways. It has fewer insurance requirements, less stringent background checks, does not require random drug tests or drug tests after crashes and does not prohibit surge pricing in emergencies.

The proposed state law also does not include prohibitions on discrimination in pick-ups and drop-offs and would not require the ride-hailing services to provide data that could be used to verify whether such discrimination is occurring, something that is included in the city ordinance.

“Why would you create a law that was less protective when they have already agreed to operate under our city’s law which is more protective?” Guidry asked.

Of course, ridesharing companies already operate under a patchwork of rules and regulations. For example, three of the largest parishes in the metro New Orleans area—Jefferson, Orleans and St. Tammany—each has its own ridesharing ordinance, and the three differ from one another in their details. Depending on traffic, it is theoretically possible to drive through all three parishes within an hour. It makes little sense to have to navigate that maze of regulatory regimes over such a short distance.

The Legislature should unleash the potential of the sharing economy statewide. It’s good for consumers and provides new opportunities for drivers to make ends meet.


Image by Ionut Catalin Parvu

Permissionless innovation vs. the precautionary principle

Jonathan Taplin worries the “unfettered monoliths” of Google, Apple, Facebook and Amazon undermine democracy and should be broken up. In Europe and elsewhere, this combination of companies is referred to collectively by the pejorative “GAFA,” a ubiquitous bogeyman and symbol of American cultural imperialism. Never mind that all four got where they are by creating tremendous value for consumers. Google organizes information, Apple makes the best phones, etc. They aren’t harming us; they’re making our lives better.

They also aren’t actual monopolies. Amazon faces off with online retailers operating on razor-thin margins. The iPhone only has 18 percent market share. Google has thousands of competitors in digital ads. Facebook could go the way of Myspace. None of these companies is free from competition, or in a position to exert monopoly power callously.

The author wants us to embrace precautionary regulation like the EU’s. But there’s a reason few big tech firms start there. It’s a good thing America’s best companies don’t have to ask permission to innovate or forgiveness for succeeding.

Westinghouse bankruptcy epitomizes failures of electricity monopolies

shutterstock_400120954

Westinghouse Electric Co. LLC—the nuclear power company that traces its lineage to the original Westinghouse Electric Corp., founded in 1886—has been forced to declare Chapter 11 bankruptcy, largely the result of immense delays and cost overruns at two nuclear construction sites, Alvin Vogtle and V.C. Summer.

The bankruptcy places a potentially huge financial burden on electric ratepayers in South Carolina and Georgia and underscores the need for nuclear technologies to reduce cost overruns. But it would be a mistake to blame the current state of nuclear technology itself for Westinghouse’s failure. The mess really stems from the perverse incentives of the natural-monopoly model, which rewards utilities for building capital-intensive “mega-projects” irrespective of investment risk.

The story dates back to the late 2000s, when Southern Co. subsidiary Georgia Power Co. and SCANA Corp. subsidiary South Carolina Electric & Gas Co. received state regulatory approval to build two reactors each at the Vogtle and V.C. Summer sites, respectively. To their credit, the utilities entered into fixed-price contracts (with cost-escalator provisions) with Westinghouse to build the nuclear facilities by a guaranteed date. This helped mitigate some of the ratepayer risk of cost overruns.

However, the Westinghouse bankruptcy diminishes these guarantees, causing legal disarray amid speculation of rate increases to recover costs of finding new contractors to finish the projects. Both utilities have filed interim agreements with Westinghouse to administer cost-to-complete assessments over a transition period.

The original sales pitch to approve the nuclear projects rested largely on hedging high natural-gas prices, anticipated federal carbon regulation, meeting customer demand growth and taking advantage of federal nuclear subsidies. Over the past decade, natural-gas prices tanked, federal carbon regulation (cap-and-trade) never materialized and demand weakened. Now, it appears the utilities may lose the cost advantages of federal nuclear subsidies. Terminating the Westinghouse contracts may force Southern to prepay the outstanding balance on the $8.3 billion loan guarantee provided by the Department of Energy. Billions in cost escalations would continue to spiral if the projects don’t start operations by the end of 2020, which would render them ineligible for the federal production tax credit for nuclear.

Many independent analysts project that delay beyond 2020 is a given. But as the interim assessment period trudges along, the utilities are telling their regulators a different story. Both downplay the remaining time and costs of completing the projects, while expressing their desire to push forward. Meanwhile, Morgan Stanley & Co. asserts that abandoning the nuclear projects is the most likely outcome. If regulators elect to complete construction, Morgan Stanley predicts further delays and estimates additional cost overruns at $5.2 billion for SCANA and $3.3 billion for Southern. By comparison, building an efficient natural-gas power plant would cost roughly $2 billion for an amount of capacity equivalent to each nuclear project.

A strong case can be made that the utilities don’t even need the plants’ full capacity. The Southeast has a surplus of regional capacity, meaning third-party sources would be available at little cost. But because regulated utilities have no incentive to buy from third parties, a well-documented bias toward self-building prevails.

State legislation championed by the utilities exacerbated the perverse incentives of the regulated-monopoly model. Georgia and South Carolina passed laws in the 2000s enabling utilities to recover costs via rate hikes during construction, rather than waiting until completion. The laws lower finance costs, but shift risk to ratepayers. The change also diminishes regulatory scrutiny of costs, thus dampening utilities’ cost-control incentives. The South Carolina Small Business Chamber of Commerce has criticized the unintended consequences, which include undermined utility incentives to avoid cost overruns, a lack of transparency and no process for public input on construction contracts.

The Westinghouse bankruptcy makes one thing clear: when legislators and regulators socialize risks and costs, consumers suffer. The regulated-monopoly model creates moral hazard, epitomized by capital-intensive mega-projects in which companies insulated from investment risks lack incentives to guard against those risks. These nuclear projects are just new cases of a century-old problem.

By the late 1980s, monopoly utilities around the world faced high costs and unwanted assets. The subsequent political pressure led to electricity-industry reforms to change incentives, the locus of decisions and risk allocation. Some states liberalized their electric industries in the late 1990s and 2000s and, despite transition challenges, realized the benefits of competitive markets, as merchant suppliers internalized investment risk. In these states, the investment consequences of unexpected policy changes and drops in natural-gas prices and electricity demand have been borne by the private sector, which has repositioned itself to maximize value in a new investment climate. Meanwhile, regulated utilities have sat on power plants that no longer offer the most economical means of producing electricity in order to continue collecting a rate of return on their asset base. Worse, some have embarked on ill-advised investments on the backs of captive ratepayers.

States that failed to learn from the boondoggle projects of regulated monopolies have repeated them. Electric ratepayers will eat much of the cost, even if regulators elect to abandon the nuclear projects, as was the case with mega-projects decades ago. Perhaps the silver lining is that policymakers in regulated-monopoly states finally will learn the appropriate lesson and join the second wave of competitive-electricity reforms.

Federal policymakers should keep in mind that nuclear still provides a strong value proposition as a reliable, zero-emissions resource. However, any technology that takes a decade to build and carries huge capital demands creates an enormous investment risk. For nuclear, the best hope comes in the form of small modular reactors (SMRs). These reactors offer major safety and operational benefits with potential for much lower cost-overrun risk. NuScale Power announced the first SMR submission to the Nuclear Regulatory Commission in January. Easing the regulatory burdens on SMRs would reduce artificial barriers to entry. If SMRs become commercially viable, procurement decisions should come from competitive forces, not rent-seeking monopolies and their regulators.


Image by Martin Lisner

Does Congress have the capacity it needs in foreign affairs?

leg-branch-foreign-affairs

The Constitution assigns Congress the power to declare war, fund the military, approve treaties and regulate commerce with other nations. Yet, over the past century, presidents have taken the leading role in foreign affairs. Today, the president heads an expanding executive branch security apparatus—one which has found itself mired in controversy many times.

What role does Congress play in foreign affairs in the 21st century? What duties should it have? Does Congress have the resources it needs? The Legislative Branch Capacity Working Group recently hosted a panel on these questions, moderated by R Street’s Kevin Kosar and featuring Kurt Couchman of the Defense Priorities Foundation and Katherine Kidder of the Center for a New American Security. Video of the panel is embedded below:

Florida House bill would make solar installations a pain

shutterstock_197075507

If you think getting home improvements approved and completed in Florida is a hassle now, just wait until you try to install solar panels, should the monopoly power companies get their way.

A bill currently under consideration in Florida’s Capitol would impose extensive disclosure and needless paperwork requirements on sellers of rooftop solar panels and other renewable-energy systems—covering everything from performance guarantees to tax advice, insurance and a requirement to project future utility rates.

H.B. 1351 by state Rep. Ray Rodrigues, R-Estero, and S.B. 90 by state Sen. Jeff Brandes, R-St. Petersburg, both would implement provisions of Amendment 4 by exempting solar and other renewable-energy devices from ad valorem property taxes. The Senate bill sticks to its objective by simply codifying the amendment, which was approved by 73 percent of Florida voters last August. The House version, however, goes beyond implementing the amendment by regulating the sale, financing and lease of these energy-generation systems, in addition to imposing other conditions.

Some requirements prescribed in the bill appear reasonable at first glance, as they relate to safety and reliability. However, they are superfluous, since installers of these devices are already regulated by the Department of Business and Professional Regulation and are required to be licensed and insured. Additionally, consumers already enjoy legal protections against fraud and other deceptive transactions under Florida’s very tough Deceptive and Unfair Trade Practices Act.

One provision in the bill even requires installers to comply with undefined “standards” set by the local utility company, which would promote an inherent conflict of interest between the renewable electricity source and the utility that stands to lose business from it.

Nevertheless, proponents cite “consumer protection” as justification for these onerous requirements, as so often is the case with excuses for a swelling nanny state to protect us from ourselves. In reality, all too often, these are nothing more than crony capitalist attempts to protect other industry players.

That, in fact, appears to be the case here. Utility companies historically have been the only option available for purchasing electricity. With the rise of solar and dramatic decreases in the cost of renewable energy, consumers now have an alternative. Utilities obviously perceive this as a threat to their business model, and businesses unaccustomed to competition generally do not like it.

So while they cannot altogether ban the sale of solar panels and the like, what better way to discourage their purchase than to complicate the process to obtain them? According to a recent Miami Herald investigation, some of H.B. 1351’s language actually was drafted by Florida Power & Light, the state’s largest utility.

If there are legitimate safety or consumer protection concerns with the sale of renewable-energy generation systems that current law does not address, a debate should indeed be had and legislation to address it considered. However, the bills currently under consideration should stick to implementing and codifying the amendment Floridians overwhelmingly approved—not shielding utility companies.


Image by travelfoto

Missouri ridesharing bill moves to Gov. Greitens’ desk

shutterstock_210713662

Legislation legalizing ridesharing services in the Show-Me State now sits on Gov. Eric Greitens’ desk, after the Missouri House passed statewide rules for transportation network companies by a 144-7 vote last week. The state Senate had already cleared the measure by a vote of 31-1 a few days earlier.

As the Associated Press described the bill:

The legislation would require that companies pay a licensing fee and adhere to a nondiscrimination policy. It would exempt them from local and municipal taxes and require drivers to submit to background checks and purchase vehicle liability insurance.

Missouri cities, like many others around the country, initially were cool to ridesharing, throwing up regulatory impediments to halt the services’ spread. By the time R Street issued its second Ridescore report in December 2015, only 15 states neither had nor were considering statewide legislation, typically focused on mandatory insurance, taxes and background checks. Today, only a handful of states have not yet passed statewide rules.

In the first Ridescore report in November 2014, Kansas City earned a D- for overall friendliness to for-hire transportation services and an F for its treatment of TNCs. Those grades improved slightly to a C and a D, respectively, in the second report, though both remained several grades lower than the average and median scores in the 50-city study.

Enacting a statewide law has been a priority for House Speaker Todd Richardson, R-Poplar Bluff, and other Missouri lawmakers focused on job creation. Uber has projected an additional 10,000 jobs for the state through expansion of its ridesharing app service. Floor remarks by legislators from Springfield—where both Uber and Lyft now operate—indicated more people have been able to get downtown since that city moved to allow ridesharing services.

The compromise that attracted such large votes in both houses specifies that Uber, Lyft and other ridesharing services must pay city taxes and be liable for pickup fees at airports. They do not have to pay meter-inspection or other license fees, and they are permitted to charge higher prices for busier times, such as rush hour or bad weather, when demand escalates. These increased charges must, of course, be accepted by the customer using the application. Moreover, both Kansas City and St. Louis won the right to audit the newly authorized services up to twice a year, to alleviate concerns regarding public safety and chiseling on fees.

State lawmakers have a lot on their plates, since Congress appears unlikely to make more than a few of the 21st-century adjustments required to maintain a reasonable level of civilization. It is encouraging that citizen participation in popular disruptive services has produced an environment where many more people on both sides of the transaction can participate with the government’s blessing and oversight.


Image by Nagel Photography

Carbon taxes are about climate issues, not budgets

shutterstock_576266683

A good test for whether politicians are serious about battling climate change, or merely using the problem as an excuse to advance a grab-bag of progressive issues, is to examine what they would do with the revenues collected from a carbon tax.

If the answer involves anything other than offsetting cuts to other taxes, then I suspect the politician’s motives are less than pure. Carbon taxes are not about raising revenue. They are about placing a price on emissions so companies and consumers have incentives to choose lower-emitting options. The goal is to put a price on an “externality” – the economic term for ill side effects that aren’t included in the price of production.

Unfortunately, Washington Gov. Jay Inslee has failed this test with his carbon-tax proposal to help fund the state’s budget. As the Tacoma News Tribune reported in late March: “Not only does Inslee say it would combat climate change, a major priority of the governor’s, but it also would raise $2.1 billion in the next two years to help make court-ordered changes to the public school system and fund construction projects.”

Climate activists routinely warn about the dire consequences for the planet if the public doesn’t get serious about the issue. They also like to harangue global-warming skeptics for their refusal to jump aboard their campaign. Yet when they have the chance to ameliorate the concerns of those with other political views, they fail to do so.

It’s hard to blame skeptics who worry that the global-warming fight is mostly about helping the state grab more tax revenue when leaders in that movement make clear that is exactly how they see a carbon tax. Fortunately, Washington legislators from both parties declined to include a carbon tax in their $44.7 billion budget plan, which the newspaper described as a “one-two punch in Inslee’s eyes.”

Carbon-tax proponents believe the tax would internalize the social cost of carbon emissions in a way that’s more efficient and cost-effective than command-and-control regulations. Its purpose is not to fund all sorts of programs or balance the budget. A carbon tax accompanied by cuts in other taxes and paired with reductions in the regulatory burden has the best shot to win over people who suspect the whole thing is a sleight of hand.

Carbon taxes are a hard enough sell even when their backers are not looking for a tax grab. Last November, Washington voters handily defeated Initiative 732, which would have been the nation’s first carbon tax. The Seattle Times reports, ironically, that “the measure had trouble marshaling consensus among progressive and environmental groups” because of “budgetary and other concerns.” Apparently, they didn’t like that its authors tried to make it revenue neutral.

That’s a sad commentary on the priorities of some activists and politicians who claim to be urgently alarmed by global warming’s threat to the planet. Voters from across the political spectrum might start to take their dire warnings more seriously when they introduce a carbon tax that is about curbing emissions, not raising taxes to pay for a bunch of programs and subsidies. Until then, expect tax-burdened voters to keep giving these proposals a failing grade.


Image by Andre Lefrancois

Are you paying your fair share of taxes?

shutterstock_582177412

The following is a guest post by attorney and freelance writer Mark Meuser.


meuser

Today, many Americans will finalize their federal income tax returns and send their 1040 forms to the Internal Revenue Service to make tomorrow’s Tax Day deadline. Whether you are receiving a refund or will need to send a check to Uncle Sam, if you worked more than 35 hours a week and did not make at least $164,500 in 2016, you will not be paying your fair share of taxes this year. Shame on you.

Obviously, I am joking, but the per-capita burden of federal spending is no laughing matter. In 2016, the federal government spent approximately $12,387.29 per resident of the United States. Some might think that $12,387.29 in taxes sounds reasonable. Under current tax rates, it would mean each and every man, woman and child must earn at least $66,450 to pay his or her fair share.

Obviously, not everyone works or earns anywhere near that amount. Some 47 percent of all Americans are either too young or too old to be gainfully employed full-time. If 100 percent of all Americans between the ages of 25 and 65 were to pay taxes, federal spending would be equivalent to approximately $23,072.30 per working-age adult.

But even among the able-bodied, we don’t see 100 percent workforce participation. Whether because of a disability or a lack of necessary job skills, or because a parent chooses to stay home and raise the children, some people just don’t work. According to the Bureau of Labor Statistics, there are approximately 100 million Americans over age 25 who work 35 hours a week or more. For the government to balance its budget, each would need to pay $39,104.43 in taxes to cover total federal spending. That would require each to have at least $164,500 in individual (not household) earnings per year.
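Here is a minimal sketch of that arithmetic in Python. The inputs are assumptions chosen to roughly reproduce the figures above (about $4 trillion in total 2016 outlays, plus the population and workforce counts the figures imply); they are illustrative, not official statistics.

```python
# A minimal sketch of the per-capita arithmetic above. These inputs are
# assumptions chosen to roughly reproduce the post's figures; they are
# not official statistics.
total_spending = 4.0e12      # approx. 2016 federal outlays, dollars (assumed)
population = 322.8e6         # approx. 2016 U.S. population (assumed)
working_age_share = 0.53     # the post says ~47% are too young or too old
full_time_workers = 102.3e6  # approx. count of 35-plus-hour workers (assumed)

print(f"Per resident:          ${total_spending / population:,.2f}")
print(f"Per working-age adult: ${total_spending / (population * working_age_share):,.2f}")
print(f"Per full-time worker:  ${total_spending / full_time_workers:,.2f}")
# Roughly $12,391 / $23,380 / $39,101 -- in line with the figures above.
```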

An American’s fair share of government spending has not always been this high. When my grandfather was born 95 years ago, per-capita federal spending was just $30.14 ($437.04 when adjusted for inflation). The run-up in federal spending amounts to a nearly 3,000 percent increase in real per-capita terms.
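That real-terms comparison is a one-liner to verify, using only the figures in the paragraph above:

```python
# The real-terms comparison from the paragraph above.
then_real = 437.04  # per-capita federal spending 95 years ago, in today's dollars
now = 12_387.29     # per-capita federal spending, 2016

increase = (now / then_real - 1) * 100
print(f"Real per-capita increase: {increase:,.0f}%")  # ~2,734%, i.e. nearly 3,000%
```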

All of which raises the question each of us should ask as we send off our tax filings: how much government am I really willing to pay for?


Image by Steve Heap

How cronyism threatens Louisiana’s craft breweries

shutterstock_313883429

Louisiana is well-known for its love of both food and alcohol. The state is a tourist destination for those looking both to enjoy excellent dining and to have a good time. Louisiana’s love affair with food has made its cuisine well-known worldwide. New Orleans’ Mardi Gras festival has few rivals around the world.

Meanwhile, across the country, craft breweries and so-called “gastropubs” have been growing. The craft-beer revolution proceeded at a slower pace in Louisiana, with Abita one of the few local craft beers to gain national exposure. Much of the reason for this disparity is the hostility the state has shown to brewers, in line with its profile generally as a terrible state in which to do business, thanks to its high taxes and crippling regulations. Louisiana has the 12th-highest beer excise tax in the country, at 40 cents a gallon. In fact, the tax-hungry state recently raised it.

If the tax increase were not enough, the state now is going after craft breweries that also serve food and hold events. Last fall, Louisiana’s craft breweries received “cease and desist” letters and were cited by the Louisiana Office of Alcohol and Tobacco Control for everything from holding yoga classes to serving food. The breweries had been holding those events for years without any complaints, but the ATC suddenly found regulations that limited what breweries could provide on their premises.

The craft brewers got angry and demanded a change in the regulations. In March, the ATC released new rules that, on the surface, would permit many such events. Alas, the devil was in the details.

The ATC ruled that live entertainment was permitted at breweries only so long as it was “not the primary purpose of the facility.” Breweries also could serve food and even charge a cover for some shows. But food sales must be “incidental to the beer sales,” meaning they could not exceed 25 percent of on-premise beer sales. The ATC also banned on-site restaurants from serving alcohol produced off-site. Finally, the ATC ruled that breweries could host fundraisers and events for nonprofits, but the beneficiary must be a registered 501(c)(3), 501(c)(6) or 501(c)(8) organization and all proceeds from the event must go to the nonprofit.

While the new rules clarify old regulations, they still threaten the existence of craft breweries and gastropubs across the state. NOLA Brewing Co. CEO Kirk Coco told The Advocate that he was concerned about how the regulations would affect his brewery’s recently opened barbecue restaurant, part of its $1.6 million expansion. Coco also warned of job losses, saying he “would guarantee you that there would be at least three or four closures in the next six months and that’s all jobs.”

Meanwhile, other brewers have threatened to take their operations out of state. One brewer considering leaving Louisiana is Parish Brewing Co. “I am in the process of planning a multimillion dollar expansion and I am considering doing so across the border in Texas or Mississippi if the government is against breweries here,” Parish Brewing owner Andrew Godley told The Advocate.

Craft brewers believe the regulations were issued at the behest of the Louisiana Restaurant Association, which sees breweries as competitors, particularly to sports bars. Instead of going to the Legislature to change the law, entrenched interests merely had to complain to an unaccountable executive-branch agency.

Serving food and holding events is an important part of the craft-brewery business. It helps breweries build brand recognition and provide jobs for their employees. Louisiana should keep the maxim “do no harm” in mind when it regulates this growing segment of the state economy.


Image by f11photo

Discussing the future of the GSEs on the Investors Unite podcast

I recently joined Investors Unite founder Tim Pagliara on the group’s housing podcast for a broad-ranging discussion about what a future arrangement for Fannie Mae and Freddie Mac might look like. Audio of the full show is embedded below.

R Street launches Justice for Work coalition with April 17 D.C. event

As the bipartisan movement for criminal-justice reform continues to move forward in the states and at the federal level, it’s time to reconsider government-imposed barriers to economic opportunity, such as occupational licensing, mandatory background and biometric checks, and other restrictions on the ability of ex-offenders to find financial stability and meaningful work.

In that vein, R Street will host an April 17 event to announce a new ideologically diverse coalition to highlight the issue of “Justice for Work.” To be held at 6 p.m., April 17, at the Stanton & Greene loft (319 Pennsylvania Ave. SE), the launch will feature an expert panel that includes ex-offenders, former law-enforcement officers, and policy and legal experts. It will be followed by an open-bar social mixer.

We are joined in this new coalition by the American Civil Liberties Union, Right on Crime, Impact Justice, Tech Freedom, FreedomWorks, Americans for Tax Reform and the American Conservative Union Foundation. Together, these members agree that prescriptive mandates may serve a purpose where there is a demonstrated public safety risk that cannot effectively be addressed otherwise. But in areas where access to work is denied solely to signal the empty political slogan of being “tough on crime,” the Justice for Work coalition seeks to make meaningful change.

RSVP here.

The ‘fixed AI’ fallacy

shutterstock_539711092

As Andy Kessler points out in The Wall Street Journal, a tax on robots would hinder entrepreneurial activity in automation and artificial intelligence (AI). The same algorithms that make job-displacing robots smarter and more effective also make us more productive at translating documents, searching for information and streamlining daily tasks. We can’t have our cake and tax it too. As Winston Churchill once said, “I contend that for a nation to try to tax itself into prosperity is like a man standing in a bucket and trying to lift himself up by the handle.”

Bill Gates and others who bemoan the changing job market fall prey to the fixed-pie fallacy—the assumption that available jobs and the wages those jobs pay are fixed quantities. Developments in information technology have led to jobs unimagined by macroeconomists and technologists of previous decades, such as social-media managers, website designers, bloggers and virtual assistants. Crafting policy based on “fixed AI” thinking will prevent new jobs from arising.

Job displacement is an inevitable consequence of technological development and economic growth. Instead of taxing our digital co-workers, thought leaders such as Gates should argue for policy changes that permit experimentation in skills-based education and workplace benefits to better equip workers with the skills and financial flexibility to adapt to the changing jobs market. To realize AI’s full benefits of productivity and convenience, we need to view it as a feature, not a bug, of our tech-imbued future.


Image by Jinning Li

Caleb Watney talks self-driving cars on KVOI

In light of last month’s high-speed crash in Tempe, Arizona, involving a self-driving Uber car (reports say the car had the right of way), R Street Tech Policy Associate Caleb Watney was a guest on Mike Check with Mike Shaw on KVOI-AM in Tucson to discuss the technology and public policy around autonomous vehicles. Audio of the segment is embedded below.

Kosar talks congressional reform on The Golden Mean

R Street Governance Project Director Kevin Kosar recently joined host Michael Golden’s podcast The Golden Mean to discuss the Legislative Branch Capacity Working Group and the prospects for congressional reform. The full show is embedded below.

Holding the administrative state accountable

R Street Senior Fellow Kevin Kosar joined the Manhattan Institute’s Oren Cass and Adam White of The Hoover Institution on the Federalist Society’s podcast to discuss the Legislative Capacity Working Group and efforts to restore Congress’ role as a check on the executive branch. The full show is embedded below.

Throwing cold water on the insurance industry’s dog bite numbers

shutterstock_359352014

Today is National Pet Day, a day to cherish the love, entertainment and fulfillment provided to us by our animal companions. Or, if you’re in the insurance industry, it’s a day to stoke fear of dog bites.

“Dog-Bite Claims Surge 18% as Children Bear Brunt of Attacks” reads the headline from Bloomberg, based on a press release from the Insurance Information Institute. Indeed, the III produces a similar release every year in recognition of National Dog Bite Prevention Week, which runs April 9 to April 15.

The calendar-making gods are sending some decidedly mixed messages.

As is their wont, insurers want to highlight safety, which is a perfectly commendable goal. Dog bites and other pet-related injuries befall thousands of people each year, and better care can and should be taken to mitigate and avoid them. They also constitute a significant portion of the loss costs associated with the liability portion of one’s homeowners insurance policy, which explains the motivation for the public-education campaign.

However, when one drills down on the numbers, there’s little to justify the alarmist rhetoric. Dog bites are not “surging” at all.

It first bears noting that liability isn’t actually an especially big-ticket item for homeowners insurers. The III notes that the industry paid out $602.2 million in dog-related claims in 2016. That sounds like a lot. But it represents a tiny portion—just a little more than 1 percent—of the more than $48 billion in claims insurers paid out, much less the $91.4 billion in direct premiums they collected, according to S&P Global’s statutory insurance data.

Also worth mentioning is that, while the headlines tout a rise in dog “bites,” the data actually refer to “dog-related injuries.” If you break your neck after tripping over your shih tzu, that gets included. How often does that happen? A lot. Falls are the number one cause of nonfatal injuries in this country. A 2009 study from the Centers for Disease Control and Prevention found an average of 87,000 fall injuries treated in emergency rooms each year were associated with cats and dogs. Dogs represented 88 percent of the total, or about 76,000 dog-related falls that send Americans to emergency rooms every year.

Of course, that 76,000 figure far exceeds the 18,123 dog-related claims reported by the III, so the vast majority of people who suffer dog-related falls never file a homeowners claim, even if they went to the emergency room. No doubt the same is true of dog bites. Of the claims we know about, what proportion are dog bites and what proportion are other kinds of injuries? We don’t know. The III doesn’t break out those numbers. We do know that dog bites sound scarier than dog falls (even though the latter might actually produce more serious injuries), so it shouldn’t be surprising that’s what gets the headline.

Speaking of headlines, let’s look at Bloomberg’s choice to characterize the rise in dog-related claims as a “surge.” It’s true that claims rose about 18 percent from 15,352 in 2015 to 18,123 in 2016. Is that really a surge? Bear in mind that there are nearly 90 million dogs in the United States. Even if we assume no single dog was responsible for more than one insurance claim, it would still mean only about 0.02 percent of American dogs contributed to an injury that sparked an insurance claim. A difference of less than 3,000 claims per year, in a universe that big, amounts to statistical noise.

But even if we were to take the incredibly small sample size at face value, note that this year’s increase followed back-to-back years in which the number of dog-related injury claims declined. From 2013 to 2015, the number of pet-related claims fell 12 percent, from 17,359 to 15,352. But were we treated to headlines about how dog bites had “plummeted”? No. No, we were not.

For that matter, it is just frankly irresponsible to represent these numbers without making basic adjustments for factors like inflation and population growth. The III notes that the average cost of a dog-related claim has risen by 73.4 percent from 2003 to 2016. This would leave one with the impression that pets have become more dangerous or, specifically, that bitey dogs have become more vicious.

But that’s just not true. Of course the average injury claim has gone up since 2003, because the cost of health care has gone up since 2003. Using a medical cost inflation calculator, one would expect the average claim to rise by about 56 percent over that period. Again, dealing with a small sample size, the mix of the kinds of claims in a given year could make the average claim go up by more or less than the baseline cost of medical inflation. Indeed, from 2015 to 2016, the average claim went down by 11 percent.

Even more significant to the overall picture is that neither the III nor any of the news outlets reporting its findings makes even the slightest effort to put into perspective that, over the long term, the number of claims has been relatively flat, even as the number of people and dogs continues to increase.

According to the III, from 2003 to 2016, the number of dog bites rose by 7 percent, from 16,919 to 18,123. But the population of the United States rose by 11 percent over that same period, from 290.1 million to 322.8 million. And as the chart below makes clear, the population of U.S. dogs surged by a whopping 35 percent.

dogs

So this actually means both that a declining proportion of Americans are being bitten by dogs each year and that a far smaller percentage of dogs are biting (or tripping, or what have you) people. In a nutshell, we’ve gone from one dog-related injury for every 17,146 people and 3,841 dogs to one for every 17,811 people and 4,949 dogs.
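Here is a quick sketch of that rate arithmetic. The claim counts and population figures come from the post itself; the dog-population counts are backed out of the post’s own ratios, so treat them as illustrative rather than official:

```python
# Claim counts (III) and U.S. population figures cited in the post.
claims = {2003: 16_919, 2016: 18_123}
people = {2003: 290.1e6, 2016: 322.8e6}
# Dog populations backed out of the post's own ratios (illustrative only).
dogs = {2003: 64.99e6, 2016: 89.69e6}

for year in (2003, 2016):
    print(f"{year}: one claim per {people[year] / claims[year]:,.0f} people "
          f"and per {dogs[year] / claims[year]:,.0f} dogs")
# Reproduces the post's per-person and per-dog ratios, up to rounding.
```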

That’s the kind of good news we should be celebrating on National Pet Day.


Image by everydoghasastory

Kevin Kosar at TPA postal reform panel

kosar-postal

R Street Senior Fellow Kevin Kosar took part in a recent Capitol Hill briefing on U.S. Postal Service reform. The panel was hosted by Taxpayers Protection Alliance and also featured representatives of Americans for Tax Reform, the American Consumer Institute and Frontiers of Freedom.

It’s time to kill the Durbin amendment

shutterstock_252948091

After six years of unfulfilled promises, it’s time the Durbin amendment finally was repealed. A last-minute addition to the Dodd-Frank Act—itself a political overreaction to the financial crisis of 2007-2009—the amendment passed without a hearing or adequate discussion of how it would work in practice. We now know it hasn’t worked at all.

Interchange fees are charged by banks to retailers to allow customers to use that bank’s debit card in that store. The Durbin amendment gave the Federal Reserve power to cap those fees, which at the time averaged $0.44 per transaction, for banks with more than $10 billion in assets.
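For context on the mechanics, the cap the Federal Reserve set under Regulation II works out to 21 cents plus 0.05 percent of the transaction value, with an optional 1-cent fraud-prevention adjustment for eligible issuers. A minimal sketch follows; the $38 purchase is a hypothetical ticket size, not a figure from the rule:

```python
# Sketch of the Regulation II debit interchange cap: 21 cents plus 0.05%
# of the transaction value, plus an optional 1-cent fraud-prevention
# adjustment for eligible issuers.
def capped_interchange(amount: float, fraud_adjustment: bool = True) -> float:
    fee = 0.21 + 0.0005 * amount
    if fraud_adjustment:
        fee += 0.01
    return fee

print(f"Fee on a hypothetical $38 debit purchase: ${capped_interchange(38.00):.2f}")
# ~$0.24, versus the $0.44 pre-rule average cited above.
```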

Proponents of the rule hoped that revenue that would otherwise have gone to banks would translate instead into lower retail prices for consumers. Indeed, retailers were projected to save an estimated $8 billion yearly. But in the nearly six years since the price controls went into effect, consumers have not benefited; a fair number, in fact, were made worse off.

The cost savings have, for the most part, become profits for retailers. The Federal Reserve Bank of Richmond found recently that three-quarters of retailers it surveyed did not change prices since interchange fee caps went into effect, and nearly one-quarter actually increased prices.

The Richmond Fed estimates that the goal of retailers passing savings on to customers in the form of lower prices has had a success rate of just 1.2 percent. These findings are confirmed elsewhere, providing evidence to conclude that consumers experienced effectively no savings at the register.

For any student of history, it should come as no surprise that governments cannot divine the “fair prices” of things. Rent-control laws in New York have created enough abandoned housing units to house all of the city’s homeless. Regulation Q, which fixed the interest rates banks could pay on deposits, encouraged complex arrangements that discriminated against smaller and less wealthy savers. One can go back as far as ancient Egypt and Babylon to find examples of people not understanding that prices convey economic realities that remain fixed, even after the government changes the prices.

That the Durbin amendment would suffer the same fate as these other price controls was not hard to predict. To offset revenue losses and remain competitive, banks needed to find ways to raise their deposit-account fees. Some did it through higher monthly service charges, while others cut back on free services like checking. A large number of financial institutions—especially small issuers like community banks and credit unions—essentially were pushed out of the competition by the administrative costs and red tape of various provisions. And all financial institutions saw reduced incentives to innovate in the payment-card industry.

As a result, financial markets were left with fewer free checking accounts, fewer debit-card rewards programs, higher costs of entry into financial services and continued reliance on payment networks more susceptible to fraud. These consequences hurt all bank customers, but especially those with lower incomes. Up to 1 million customers were pushed out of the banking system, presumably into the domain of alternative financial providers such as check-cashers and pawnshops.

From the observable consequences, one would be hard-pressed to find the amendment as accomplishing any legitimate objective, other than unintentionally enshrining benefits to particular kinds of retailers. The rule created market distortions that hurt all financial institutions, especially smaller ones, and hurt all depository customers, especially the poor. The Durbin amendment is a case study in how rushing into legislation—without give-and-take deliberation—tends to produce the opposite of what was intended.


Image by alice-photo

Adams talks self-driving cars at Institute for Legal Reform

ian chamber

The threat of litigation could derail the promise of autonomous vehicles to save lives. R Street Senior Fellow Ian Adams recently joined a panel hosted by the U.S. Chamber of Commerce’s Institute for Legal Reform to address how potential liability issues can be managed so the technology can achieve its full potential. Video of the full panel is embedded below.

Short-term rentals are an opportunity Missouri can’t afford to miss

shutterstock_270828305

Whether it’s the cars in their garages or the rooms in their homes, Americans are realizing they’re leaving money on the table when their property remains idle. House Bill 608, making its way through the Missouri Legislature, ensures that Missourians are able to take advantage of economic opportunity in the short-term home rental space.

The economic impact of short-term rentals is significant, as one study after another confirms there is growing demand. The National University Institute for Policy Research found that short-term rentals generated a total economic impact of $285 million in San Diego from 2014 to 2015. A study commissioned by HomeAway earlier this year found the economic impact of short-term rentals in Nashville was $477.2 million.

While H.B. 608 isn’t perfect, reasonable statewide standards for short-term rentals make a lot of sense. If Missouri’s legislators want the “gold standard” of short-term-rental laws, Arizona is a good place to start. The Grand Canyon State collects a number of lodging-related taxes on short-term rentals, but prevents cities, towns and counties from restricting short-term rentals simply because the property in use isn’t classified as a hotel.

One of H.B. 608’s more significant departures from the Arizona model is a provision that allows “any county, city, town, village, township, fire district, sewer district, or water district” essentially to ban short-term rentals before April 1, 2018. The bill’s new statewide provisions won’t affect those “political subdivisions” that act before the grandfather date. This might create an incentive for local governments to race to restrict short-term rentals, simply to retain the option to do so in the future.

Oddly, that’s one of the chief problems that a commonsense short-term-rental law should correct. Missourians across the state should have the same basic opportunity to generate additional income with their properties, not a patchwork of local ordinances that grant opportunity to some and remove it from others.

I’ve seen firsthand how short-term rentals benefit the little guy. On a trip to Charleston last year, I crossed paths with an Uber driver who used the income from short-term rentals in his basement to purchase his car. That’s not some corporation horning in on a neighborhood; it’s the American dream of being able to work hard and succeed, using every tool at your disposal.

A transparent and predictable legal foundation for short-term rentals at the state level probably makes H.B. 608 worth supporting, even with the grandfathering provision. But the Missouri Legislature would do better to ensure the economic opportunity of short-term rentals is open to all the state’s citizens.


Image by f11photo

Ohio and Indiana take different approaches to opioid epidemic

shutterstock_593324582

Ohio Gov. John Kasich described drug addiction as “a common enemy” in this week’s state-of-the-state address. Kasich highlighted the challenge in terms similar to those laid out by state Speaker of the House Cliff Rosenberger, R-Clarksville, back in January when members were sworn in. But there does not yet appear to be regional consensus on how to engage this blight on civilization.

Just imagine what kind of relief Ohio could be afforded in health care, where most of the $1 billion in state and federal Medicaid addiction-treatment funds goes, if this problem were to be resolved. Nearly as large a cost, in terms of both wasted lives and government expenditures, stems from corrections programs for drug abusers. The costs in education, housing, social services and workplace productivity are incalculable.

As J.D. Vance, author of last year’s bestseller Hillbilly Elegy: A Memoir of a Family and Culture in Crisis, pointed out last week in a keynote address to the Federalist Society in Columbus, a policy that works has got to do something about the addict, but also something for the aunts, uncles and grandmothers who shoulder the burden of child care for a mother who has succumbed to a drug overdose. In 2015, Ohio led the nation in this tragic category with 3,050 overdose deaths, and the 2016 total may have topped 4,000. As the Columbus Dispatch reported, Senate Minority Leader Joe Schiavoni, D-Boardman, noted at this week’s state-of-the-state joint session of the Legislature that two Ohio counties have had to rent refrigerator trucks to handle the surge in corpses from lives snuffed out by overdose.

The first of Kasich’s major proposals to tackle the issue is a $20 million grant fund to accelerate treatment programs and technologies that promise to serve as useful tools in the fight against drug abuse. The money would come from the Ohio Third Frontier Commission, which votes to dole out bond proceeds for 21st-century innovations. The idea is that these resources might bring to market some breakthrough addiction-mitigating technology that otherwise would stall out for lack of funding.

Currently, prescriptions for pain medications can be written for 30 to 90 days. According to the Ohio Department of Health, nearly 800 million doses of pain pills were prescribed in Ohio in 2012, although the Dispatch noted that general awareness of the overdose problem helped curb that figure to about 631 million doses last year, a decline of roughly 20 percent.

Number of opioid doses dispensed to Ohio patients, 2011-2015

ohio opioids

The governor’s second proposal is that prescriptions be limited to shorter terms—seven days for adults and five days for minors—for acute pain, but not chronic conditions. Doctors could use their judgment to exceed these limits if they document the reasons. The Ohio Medical Board, Dental Board and Ohio Boards of Pharmacy and Nursing apparently will all have to sign off on the proposed legislation.

Next door in Indiana, a legislative proposal passed on the House floor this week gives up on the modern approaches to criminal justice, which include giving judges more discretion and preferring treatment over punishment. S.B. 324 instead aims to crack down on heroin dealers and those who rob pharmacies, increasing the severity of the penalties for dealing and lessening the judiciary’s discretion in sentencing. Critics argue the Legislature is “backsliding” to previous, failed attempts to address the drug epidemic, but the bill was approved by a huge 72-18 margin.

As state Rep. Ed Delaney, D-Indianapolis, noted before the vote, taking away judges’ discretion means giving more discretion to prosecutors, which isn’t an unalloyed good in the current criminal justice landscape. Even though nearly all lawmakers agree with the proposition that the goal of incarceration is to deal with the “people we are afraid of, and not the people we are mad at,” it proves difficult to convince them not to be more afraid of drug dealers than rapists and armed robbers.

I can’t yet fault the approach in either state, since all serious policymakers are at their wits’ end about the drug problem. But I have to root for Ohio’s search for innovative breakthroughs. As noted above, opioid abuse claims precious lives, derails careers and consumes billions in government expenditures.

Perhaps it is time for a serious discussion of the ameliorative potential of marijuana extracts for pain relief. According to public opinion polls, most Americans would like to give medical marijuana a chance to prove its value. However, this is an area where there is a clear conflict not just between science and law, but between two distinct sets of cultural values.


Image by Steve Heap

Toward a global norm against manipulating integrity of financial data

shutterstock_388551307

The following is a guest post by Tim Maurer, who co-directs the Cyber Policy Initiative at the Carnegie Endowment for International Peace, and Steven Nyikos, research analyst at the Carnegie Endowment for International Peace.


The February 2016 theft of $81 million from Bangladesh’s central bank, which recent reports suggest may have been perpetrated by agents of North Korea, demonstrated the scale of risk that malicious hackers pose to financial institutions.

Cyberattacks to manipulate the integrity of financial data pose a distinct set of systemic risks. While a cyberattack on an electrical grid, for example, will be mostly limited to a single country’s territory or its immediate neighbors, the effects of an attack on the financial system are not bound by geography. Such attacks could lead to bankruptcies that, in turn, send shock waves throughout the global system.
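
To make concrete what “manipulating the integrity of financial data” means in practice, consider a minimal sketch of the defensive flip side: a keyed hash that lets an institution detect silent tampering with a ledger record. This example is purely illustrative and is not drawn from the Carnegie paper; the key, record format and function names are all hypothetical.

```python
# Illustrative sketch only: detecting tampering with a ledger record
# via HMAC-SHA256. The key and record format are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"example-shared-key"  # hypothetical key held by the institution


def seal(record: str) -> str:
    """Compute an HMAC-SHA256 tag over a ledger record."""
    return hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()


def verify(record: str, tag: str) -> bool:
    """Return True only if the record is unchanged since it was sealed."""
    return hmac.compare_digest(seal(record), tag)


entry = "2016-02-04,transfer,USD 81000000,acct-XXXX"
tag = seal(entry)

tampered = entry.replace("81000000", "81000001")
print(verify(entry, tag))     # True  -- the untouched record passes
print(verify(tampered, tag))  # False -- a one-digit change is caught
```

An attacker capable of both altering records and recomputing the tags defeats such a check, which is one reason integrity attacks by well-resourced state actors are treated as a systemic risk rather than a routine security problem.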

The G-20 finance ministers and central bank governors recognized the threat in a March 18 communiqué:

The malicious use of Information and Communication Technologies (ICT) could disrupt financial services crucial to both national and international financial systems, undermine security and confidence and endanger financial stability.

Now the G-20 heads of state have an opportunity to take further action. A new white paper by the Carnegie Endowment for International Peace proposes that they explicitly commit not to undermine the integrity of financial institutions’ data—whether in peacetime or during war—or allow their nationals to do so, and to cooperate with the international community when such attacks do occur.

Most states already demonstrate restraint when it comes to cyberattacks that could compromise the integrity of financial institutions’ data. By making such restraint explicit, they could:

  • Send a clear signal that global financial stability depends on preserving the integrity of financial data and that the international community considers attacks on that integrity off limits;
  • Build confidence among states that restraint in this domain is already the norm and thereby make it easier to mobilize the international community when that norm is violated;
  • Foster greater international collaboration to tackle nonstate actors who target financial institutions with cyber-enabled means; and
  • Complement and enhance existing agreements and efforts, namely the 2015 G-20 communiqué, the 2015 UNGGE report and the 2016 cyber guidance from the Committee on Payments and Market Infrastructures and the International Organization of Securities Commissions (CPMI-IOSCO).

The agreement proposed in the Carnegie white paper would commit states not to conduct or knowingly support any activity that intentionally manipulates the integrity of financial institutions’ data and algorithms, wherever they are stored or while in transit. It would also bind states, to the extent permitted by law, to respond to requests by other states for assistance in halting cyberattacks that target financial institutions’ data and algorithms and that either pass through or emanate from the state in question.

Elements of the proposed agreement are mutually reinforcing. The commitment by states to provide assistance and information upon request shifts the burden of attribution from the victim of an attack to states that have professed an interest in helping to respond to, and ultimately prevent, such attacks. Linking an agreement on state restraint to expectations that the private sector implement due-diligence standards addresses potential moral-hazard problems.

The agreement would build on existing international law and on recent international efforts to develop rules for cyberspace. These include the 2015 report of the U.N. Group of Governmental Experts, which proclaimed:

States must not use proxies to commit internationally wrongful acts using ICTs, and should seek to ensure that their territory is not used by non-State actors to commit such acts.

The G-20 heads of state could advance this norm powerfully, building on the finance ministers’ statement, by articulating it formally when they meet in July.

Of course, even in the 21st century, a few states that are relatively cut off from the global economy, and nonstate actors who may or may not be affiliated with them, could still conduct cyberattacks against financial institutions. But states that explicitly endorse the norm would be more united and would have a clear basis to demand retaliatory action against violators—be they states, terrorists or cybercriminals.


Image by vectorfusionart

Vivek Murthy on vaping and public health

R Street Associate Fellow Damon L. Jacobs attended the recent National Council for Behavioral Health’s NatCon conference in Seattle, where he got to ask Surgeon General Vivek Murthy to weigh in on the role vaping could play in harm reduction and public health. Video of the exchange is embedded below.

Juvenile justice legislation now moves to U.S. House floor

shutterstock_132550763

A decade after Congress allowed the Juvenile Justice and Delinquency Prevention Act’s authorization to expire, legislation to reauthorize the law is moving to the House floor following today’s successful markup by the Committee on Education and the Workforce.

First authorized in 1974, the JJDPA has been an important tool in protecting children who are in the custody of the criminal-justice system. Based on broad consensus standards of care, the law ensures that children held for “status offenses”—that is, acts that are illegal only because they were committed by someone under the age of majority—can’t be held in jails or prisons unless the child also committed a criminal offense. Another crucial provision of the law requires that, if a child is to be detained, there must be “sight and sound” separation from adult offenders.

The JJDPA has not been reauthorized since it expired in 2007. The current House bill is the Juvenile Justice Reform Act, introduced last week by Reps. Bobby Scott, D-Va., and Jason Lewis, R-Minn. A Senate companion is expected to be introduced next week.

While one should always bear federalism concerns in mind when the federal government sets out standards for issues that clearly are in the states’ purview, it’s encouraging that the JJDPA is back on Congress’ agenda. This is an important piece of legislation that helps ensure children are protected and gives them the opportunity to grow and flourish in their communities.


Image by niceregionpics

Watney talks digital privacy with Chad Benson

R Street Research Assistant Caleb Watney was a guest recently on Radio America’s “The Chad Benson Show” to discuss Congress’ recent move to vacate Federal Communications Commission privacy rules using the Congressional Review Act. The full interview is embedded below.