Social justice and big data are both incredibly popular in California, yet you’d be hard pressed to find many examples of either in its criminal justice system. Lawmakers sought to remedy this paradox by replacing cash bail with pretrial risk assessments only to face unexpected pushback from some reformers. The measure, which Governor Jerry Brown signed into law in August, has since been stalled by a bail bond industry-backed 2020 ballot referendum.

It’s hard to overstate the stakes of a bail hearing or the difficulty of the bail decision. In the blink of an eye, a judge has to calculate which defendants can go free and which cannot. A wrong decision could mean an innocent person languishing in a jail cell unnecessarily or a dangerous one walking free. It’s not hard, then, to see the allure of predictive analytics like pretrial risk assessments, which offer a chance to add some element of science to the art of bail decisions.

Of course, as the controversy erupting around these tools shows, the devil truly is in the details — specifically, the fact that all of the data driving these assessments derive from a broken and discriminatory justice system. The old computing maxim ‘garbage in, garbage out’ has thus been bandied about with increasing frequency as reformers worry that these tools will simply carry the sins of the past into the future.

But perhaps there’s a simpler way to use data to improve these decisions. What if instead of concentrating on providing data on the public to judges, we gave the public more data on the judges? In other words, to reduce human error, let’s look at the humans themselves.

Although the idea of judicial discretion is championed by nearly all parties — even proponents of cash bail highlight it as a primary reason for maintaining the status quo — there is precious little data on whether any given judge’s discretion is worthy of such deference. At best, you’re likely to find only a categorical “qualified/not qualified” rating from your local bar association that does little to describe how the judge rules on actual cases.

This gaping hole in the data is all the more surreal considering what we know about the judicial community writ large. For example, judges are more likely to set cash bail for black defendants than for white ones, and the amounts they set for black defendants are, on average, higher. If that’s how the ‘average’ judge is deciding bails, what about the below-average one?

Case data would not only better illuminate disturbing trends at the system level but could also identify which judges are doing the most to drive them. Pretrial decisions broken down by demographics would reveal the potential biases — both subconscious and conscious — of the judges making them. A judge found to make particularly skewed decisions could be sent for remedial training or even removed from office.

In addition to helping ensure that different defendants receive equal justice before the same judge, this data could reduce the variation in outcomes between judges. It’s hardly a mark of equal justice, for example, for one judge to detain fewer than 5 percent of defendants while a neighboring judge detains almost 25 percent. With this kind of case data in hand, however, outlier judges could be identified and addressed accordingly so that justice is determined by the facts of a case and not the luck of the judicial draw.
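To make the idea concrete, here is a minimal sketch of the kind of analysis such case data would support, assuming a hypothetical case-level export with columns such as judge_id, detained and defendant_race (names invented here for illustration; no such standardized dataset currently exists):

```python
# Minimal sketch, not an existing system: compute each judge's detention rate
# from hypothetical case-level data and flag statistical outliers.
import pandas as pd

# One row per pretrial decision; the file name and column names are assumptions.
cases = pd.read_csv("pretrial_decisions.csv")

# Share of each judge's defendants ordered detained, plus case counts.
by_judge = (
    cases.groupby("judge_id")["detained"]
    .agg(rate="mean", n="count")
    .reset_index()
)

# Flag judges whose detention rate sits more than two standard deviations
# from the systemwide mean, e.g. the under-5-percent vs. nearly-25-percent
# spread described above.
mean_rate, std_rate = by_judge["rate"].mean(), by_judge["rate"].std()
outliers = by_judge[(by_judge["rate"] - mean_rate).abs() > 2 * std_rate]
print(outliers.sort_values("rate", ascending=False))

# The same grouping broken down by defendant demographics would surface
# disparities within a single judge's decisions as well.
per_group = (
    cases.groupby(["judge_id", "defendant_race"])["detained"]
    .mean()
    .unstack()
)
print(per_group)
```

None of this requires predictive modeling; it is ordinary descriptive statistics applied to records the courts already generate.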

This information would be beneficial regardless of how Californians vote in 2020. If they elect to continue the move to risk assessments, gathering this data could serve as a critical oversight measure to ensure that judges implement the assessment tools uniformly and as intended, and do not detain defendants inequitably or unnecessarily. Likewise, if cash bail wins the day, we could use this data to tackle at least some of the inequities in how cash bail is set today.

The bail decision is too fraught, and human beings too flawed, for us not to consider how data analytics could improve the pretrial process. Before we get lost in how we could use data to try to forecast how defendants might act, however, we’d be well served to use it to evaluate how judges are actually acting. After all, a tool is only as useful as the person who wields it.