From 💰Crowd Money:

Word on the (R) Street

We spoke with Mary Brooks, a senior research associate for Cybersecurity and Emerging Threats at R Street Institute, for further insights on this issue. Mary wrote an article on prediction markets and policy for Lawfare in July, and was generous enough to share her observations with us as she continues to research and write about the growing space:

We’ve seen relatively few examples of governments soliciting crowd-forecasting to help guide policymaking. Three conditions need to be met before this can happen at scale.

First, the information taken from a market should be fairly reliable. Otherwise it’s just more noise.

Second, policymakers need to decide how they can responsibly use crowdsourced information. No one is ever going to say, ‘Well, the market thinks this policy will be created, so let’s do it.’ Instead, a market might be used to gather more information about an issue, or to corroborate inputs and compare against other information sources.

Finally, the market must be structured so that someone with an agenda can’t skew results toward the outcome they want, even though the causal relationship between the market and the policy will likely never be so direct. The market needs to reassure users and the public that answers can’t be manipulated to influence policy outcomes.

As Mary illustrates above, the state of forecasting and prediction market platforms today is not necessarily conducive to policymaking. This is not to say that information from prediction markets cannot be used to make decisions, but rather that the platforms and the dialogue around forecasting do not yet lend themselves to action.