Questions on Efficacy and Bias in Pretrial Justice Algorithms Continue with Two More Scholarly Articles

National Survey Data Shows that the Public Disfavors Risk Algorithms in Criminal Justice

June 28, 2018


With public policy issues shifting so quickly, policy-makers are often caught in a bind when the assumptions on which key policies and ideas are based begin to fall by the wayside in the middle of the debate.  Risk assessments in criminal justice certainly have their place among these questionable policy debates.

The idea that comprehensive risk assessments can free the criminal justice system from bias and make it more fair is being exposed as deeply flawed.  These assessments profile people in secret, which should come as no surprise, and then engage in category discrimination: we create categories to put people in so we can label them. We score people on a numerical scale (1 through 6), imposing a modern-day risk scarlet letter based on static historical data, and that score dictates the pre-packaged response to their behavior at every stage, from bail to sentencing, probation, and even parole.

The move to replace the so-called “cash bail system” with a risk-based system is not a new idea.  Nor is the idea of using these check-box risk assessments as the basis for such important decisions.  What is new is that scholars are recognizing that risk-based regimes are fundamentally flawed and, as we have previously noted, may be a significant driver of generational mass incarceration.

As policy-makers consider bail reform and weigh the merits of moving to this new algorithmic justice, they may want to take a look at two forthcoming scholarly articles that again spell out problems which, left unaddressed, could make the criminal justice system even worse.

In a forthcoming article in the Duke Law Journal, author Aziz Huq summarizes the state of affairs in risk assessments:

From the cotton gin to the camera phone, new technologies have scrambled, invigorated, and refashioned the terms on which the state coerces. Today, we are in the midst of another major reconfiguration of state coercion. Police, criminal courts, and parole boards across the country are turning to sophisticated algorithmic instruments to guide decisions about the ‘where,’ ‘whom,’ and ‘when’ of law enforcement. The new predictive algorithms trawl immense quantities of data, exploit massive computational power, and leverage advances in machine-learning technologies to generate predictions no human would conjure. These tools are likely to have enduring effects on the criminal justice system. Yet law remains far behind in thinking through the difficult questions that arise when algorithmic logic substitutes for human discretion.

Another key point Huq makes is that there is no generally accepted method to evaluate whether a risk assessment is biased.  This calls into question the entire risk-based regime.
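To see why no single test settles the bias question, consider a minimal, purely hypothetical sketch (not any vendor's actual tool): two common fairness checks, false positive rate parity and precision (calibration) parity, can point in different directions on the same scores when groups have different underlying base rates. The data, threshold, and scoring rule below are all illustrative assumptions.

```python
# Hypothetical illustration: the same risk score, evaluated two common ways,
# can look "fair" by one metric and "biased" by another when base rates differ.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    """Simulate outcomes and a noisy risk score for one illustrative group."""
    reoffended = rng.random(n) < base_rate
    score = np.clip(reoffended * 0.3 + rng.normal(0.4, 0.2, n), 0, 1)
    return reoffended, score

def metrics(reoffended, score, threshold=0.5):
    flagged = score >= threshold
    fpr = np.mean(flagged[~reoffended])   # share of non-reoffenders flagged high risk
    ppv = np.mean(reoffended[flagged])    # share of high-risk flags that reoffend
    return fpr, ppv

# Assume (hypothetically) the two groups differ only in base rate.
for name, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    fpr, ppv = metrics(*simulate_group(10_000, base_rate))
    print(f"{name}: false positive rate = {fpr:.2f}, precision = {ppv:.2f}")

# The false positive rates come out roughly equal, but precision diverges,
# so whether the tool counts as "unbiased" depends on which metric you pick.
```

The point of the sketch is not that one metric is right; it is that reasonable definitions of fairness conflict, which is exactly why there is no agreed-upon way to certify a risk assessment as unbiased.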

In a separate release last week, A.J. Wang at Yale Law School posted an important draft that is likely bound for publication.  Based on a nationally representative sample, he found that the public disfavors the use of these algorithms even when they are perceived as transparent:

Statistical algorithms are increasingly used in the criminal justice system. Much of the recent scholarship on the use of these algorithms have focused on their “fairness,” typically defined as accuracy across groups like race or gender. This project draws on the procedural justice literature to raise a separate concern: does the use of algorithms damage the perceived fairness and legitimacy of the criminal justice system? Through three original survey experiments on a nationally-representative sample, it shows that the public strongly disfavors algorithms as a matter of fairness, policy, and legitimacy. While respondents generally believe algorithms to be less accurate than either of these methods, accuracy alone does not explain their preferences. Creating “transparent” algorithms helps but is not enough to make algorithms desirable in their own right. Both surprising and troubling, members of the public seem more willing to tolerate disparate outcomes when they stem from an algorithm than a psychologist.

Wang opens his article with some sharply worded responses from survey participants, who call the process of risk assessment “stupid,” among other things:

An algorithm cannot take into account factors such as human emotion and need. It is stupid to allow a mathematical equation, the value of which is only as useful as the data it utilizes to operate, to determine the fate of a being that possesses free will. That’s outright absurd, stupid, and dangerous.

I think using the human element is most fair and humane. Guidelines would be second best, but seem to lack some humanity. An algorithm is the most cold.

People are not statistics. Someone’s fate shouldn’t be determined by an algorithm.

– Survey respondents, assessing the use of algorithms in bail hearings.

Not exactly a ringing public endorsement of a greater shift to algorithmic justice.

We need not go far into the past to find that groups typically viewed as progressive, like the ACLU, opposed passage of the Federal Bail Reform Act of 1984, which resembles the risk-based system in California’s proposed Senate Bill 10 and similar legislation being considered in other states.  Ira Glasser of the ACLU testified against that legislation, arguing that the “clear answer is no” as to whether we can predict risk accurately.  Said Glasser in 1981:

Thus, there is no way to imprison people based on behavioral predictions except at the price of liberty of many who would not be dangerous and not commit a new crime if released.

As we have said, the heart of the bail reform movement is the use of risk algorithms to decide who gets bail and what the bail should be.  As the arc of history begins to bend, it might be time for policy-makers to think long and hard about whether spending millions – or even billions – to expand the criminal justice system into an archaic computerized roulette table is worth it – all while victims and justice hang in the balance.
